@@ -297,7 +297,7 @@ In this context, we can define the notions of precision, recall and F-measure:
 
     F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}.
 
-Here some small examples in binary classification:
+Here are some small examples in binary classification::
 
     >>> from sklearn import metrics
    >>> y_pred = [0, 1, 0, 0]
@@ -411,7 +411,7 @@ their support
 
     \texttt{weighted\_F\_beta}(y, \hat{y}) &= \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (1 + \beta^2) \frac{|y_i \cap \hat{y}_i|}{\beta^2 |\hat{y}_i| + |y_i|}.
 
-Here an example where ``average`` is set to ``average`` to ``macro``:
+Here is an example where ``average`` is set to ``macro``::
 
     >>> from sklearn import metrics
     >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -427,7 +427,7 @@ Here an example where ``average`` is set to ``average`` to ``macro``:
     >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='macro')  # doctest: +ELLIPSIS
     (0.22..., 0.33..., 0.26..., None)
 
-Here an example where ``average`` is set to to ``micro``:
+Here is an example where ``average`` is set to ``micro``::
 
     >>> from sklearn import metrics
     >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -443,7 +443,7 @@ Here an example where ``average`` is set to to ``micro``:
     >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='micro')  # doctest: +ELLIPSIS
     (0.33..., 0.33..., 0.33..., None)
 
-Here an example where ``average`` is set to to ``weighted``:
+Here is an example where ``average`` is set to ``weighted``::
 
     >>> from sklearn import metrics
     >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -459,7 +459,7 @@ Here an example where ``average`` is set to to ``weighted``:
     >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='weighted')  # doctest: +ELLIPSIS
     (0.22..., 0.33..., 0.26..., None)
 
-Here an example where ``average`` is set to ``None``:
+Here is an example where ``average`` is set to ``None``::
 
     >>> from sklearn import metrics
     >>> y_true = [0, 1, 2, 0, 1, 2]
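The relationship between these averaging strategies can be verified directly. A small sketch, using the same labels as the examples above, checking that ``'macro'`` is simply the unweighted mean of the per-class values returned by ``average=None``:

```python
import numpy as np
from sklearn.metrics import precision_score

# Same labels as in the documentation examples above
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Per-class precision, as returned with average=None
per_class = precision_score(y_true, y_pred, average=None)

# 'macro' averages the per-class values with equal weight
macro = precision_score(y_true, y_pred, average='macro')
print(per_class, macro)
```

``'micro'`` instead pools the true/false positive counts over all classes before dividing, and ``'weighted'`` weights each class by its support.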
@@ -492,7 +492,7 @@ value and :math:`w` is the predicted decisions as output by
     L_\text{Hinge}(y, w) = \max\left\{1 - wy, 0\right\} = \left|1 - wy\right|_+
 
 Here is a small example demonstrating the use of the :func:`hinge_loss` function
-with a svm classifier:
+with an svm classifier::
 
     >>> from sklearn import svm
     >>> from sklearn.metrics import hinge_loss
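The hinge-loss formula can likewise be checked by hand. A minimal sketch (labels and decision values invented for illustration) comparing the mean of max(1 - wy, 0) against ``hinge_loss``:

```python
import numpy as np
from sklearn.metrics import hinge_loss

# Illustrative labels in {-1, +1} and decision-function values
y_true = np.array([-1, 1, 1, -1])
pred_decision = np.array([-2.2, 1.3, 0.5, 0.8])

# Mean over samples of max(1 - w*y, 0), as in the formula above
manual = np.mean(np.maximum(1 - y_true * pred_decision, 0))
print(manual, hinge_loss(y_true, pred_decision))
```

Correctly classified points with margin at least 1 contribute nothing; the two misclassified or low-margin points account for the whole loss.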
@@ -822,7 +822,7 @@ that can be used for model evaluation.
 
 One typical use case is to wrap an existing scoring function from the library
 with non-default values for its parameters, such as the ``beta`` parameter for the
-:func:fbeta_score function::
+:func:`fbeta_score` function::
 
     >>> from sklearn.metrics import fbeta_score, Scorer
     >>> ftwo_scorer = Scorer(fbeta_score, beta=2)
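In current scikit-learn releases the ``Scorer`` class used in this diff has been replaced by ``make_scorer``, which wraps a metric function in the same way. A sketch under that assumption:

```python
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.svm import LinearSVC

# make_scorer plays the role of the Scorer class above in
# current scikit-learn; the wrapping idea is identical.
ftwo_scorer = make_scorer(fbeta_score, beta=2)

# A scorer is called as scorer(estimator, X, y); tiny toy data for illustration
X, y = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]
clf = LinearSVC().fit(X, y)
print(ftwo_scorer(clf, X, y))
```

The resulting scorer can also be passed directly to ``GridSearchCV`` or ``cross_val_score`` via their ``scoring`` parameter.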