
Commit 04dda55

Fix doc markdown (#5732)
Fixed documentation markdown remarks for:
* MulticlassClassificationMetrics.LogLoss
* MulticlassClassificationMetrics.LogLossReduction

Signed-off-by: Robin Windey <[email protected]>
1 parent b02b6e1 commit 04dda55

File tree

2 files changed, +3 −3 lines


src/Microsoft.ML.Data/Evaluators/Metrics/CalibratedBinaryClassificationMetrics.cs (+1 −1)
@@ -37,7 +37,7 @@ public sealed class CalibratedBinaryClassificationMetrics : BinaryClassification
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
 /// The log-loss reduction is scaled relative to a classifier that predicts the prior for every example:
-/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}
+/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}$
 /// This metric can be interpreted as the advantage of the classifier over a random prediction.
 /// For example, if the RIG equals 0.2, it can be interpreted as "the probability of a correct prediction is
 /// 20% better than random guessing".
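
As a sanity check on the corrected formula, here is a minimal C# sketch (not part of this commit; the helper and its name are hypothetical) that evaluates $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}$ and reproduces the RIG = 0.2 example from the remark:

    using System;

    class LogLossReductionSketch
    {
        // Evaluates the documented formula:
        // LogLossReduction = (LogLoss(prior) - LogLoss(classifier)) / LogLoss(prior).
        static double LogLossReduction(double priorLogLoss, double classifierLogLoss)
            => (priorLogLoss - classifierLogLoss) / priorLogLoss;

        static void Main()
        {
            // Assumed sample values: a prior-only baseline with log-loss 0.5 and a
            // classifier with log-loss 0.4 give (0.5 - 0.4) / 0.5 = 0.2, i.e. the
            // classifier is 20% better than random guessing, as the remark states.
            Console.WriteLine(LogLossReduction(0.5, 0.4)); // ≈ 0.2
        }
    }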

src/Microsoft.ML.Data/Evaluators/Metrics/MulticlassClassificationMetrics.cs (+2 −2)
@@ -24,7 +24,7 @@ public sealed class MulticlassClassificationMetrics
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
 /// The log-loss metric is computed as follows:
-/// $LogLoss = - \frac{1}{m} \sum_{i = 1}^m log(p_i),
+/// $LogLoss = - \frac{1}{m} \sum_{i = 1}^m log(p_i)$,
 /// where $m$ is the number of instances in the test set and
 /// $p_i$ is the probability returned by the classifier
 /// of the instance belonging to the true class.

@@ -41,7 +41,7 @@ public sealed class MulticlassClassificationMetrics
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
 /// The log-loss reduction is scaled relative to a classifier that predicts the prior for every example:
-/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}
+/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}$
 /// This metric can be interpreted as the advantage of the classifier over a random prediction.
 /// For example, if the RIG equals 0.2, it can be interpreted as "the probability of a correct prediction is
 /// 20% better than random guessing".
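
The corrected LogLoss formula can be exercised the same way. Below is a minimal sketch, assuming the $log$ in the formula is the natural logarithm and using a made-up probability array; the helper is illustrative, not ML.NET API:

    using System;
    using System.Linq;

    class LogLossSketch
    {
        // Implements LogLoss = -(1/m) * sum_{i=1}^{m} log(p_i), where p_i is the
        // probability the classifier assigned to the true class of instance i.
        static double LogLoss(double[] trueClassProbabilities)
            => -trueClassProbabilities.Average(p => Math.Log(p));

        static void Main()
        {
            // Assumed sample data: true-class probabilities for three test instances.
            double[] p = { 0.9, 0.8, 0.6 };
            Console.WriteLine(LogLoss(p)); // ≈ 0.28; a perfect classifier (all p_i = 1) yields 0
        }
    }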
