
Commit 8db5750

GaelVaroquaux authored and amueller committed

DOC: minor rst issues
Plus I don't like capitalized first letters in titles

1 parent 03aa748 · commit 8db5750

File tree

2 files changed: +9 −9 lines changed


doc/modules/grid_search.rst (2 additions, 2 deletions)

@@ -49,10 +49,10 @@ combinations is retained.
 This can be done by using the :func:`cross_validation.train_test_split`
 utility function.

-.. _gridsearch_scoring:
-
 .. currentmodule:: sklearn.grid_search

+.. _gridsearch_scoring:
+
 Scoring functions for GridSearchCV
 ----------------------------------
 By default, :class:`GridSearchCV` uses the ``score`` function of the estimator

doc/modules/model_evaluation.rst (7 additions, 7 deletions)

@@ -297,7 +297,7 @@ In this context, we can define the notions of precision, recall and F-measure:

     F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}.

-Here some small examples in binary classification:
+Here some small examples in binary classification::

    >>> from sklearn import metrics
    >>> y_pred = [0, 1, 0, 0]
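The :math:`F_\beta` formula in the hunk above reduces to the harmonic mean of precision and recall when ``beta = 1``. A minimal pure-Python sketch of that formula (not scikit-learn's implementation; the precision/recall values below are hypothetical):

```python
def fbeta(precision, recall, beta):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with the
    # usual convention that the score is 0 when both P and R are 0
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Hypothetical values: perfect precision, half the positives recovered.
print(fbeta(1.0, 0.5, beta=1))  # 0.666..., the harmonic mean of 1.0 and 0.5
print(fbeta(1.0, 0.5, beta=2))  # beta=2 weights recall higher, so the score drops
```

Larger ``beta`` pushes the score toward recall, smaller ``beta`` toward precision, which is why a recall-oriented grid search might pick ``beta=2`` (as the last hunk in this commit does).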
@@ -411,7 +411,7 @@ their support

     \texttt{weighted\_{}F\_{}beta}(y,\hat{y}) &= \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (1 + \beta^2)\frac{|y_i \cap \hat{y}_i|}{\beta^2 |\hat{y}_i| + |y_i|}.

-Here an example where ``average`` is set to ``average`` to ``macro``:
+Here an example where ``average`` is set to ``average`` to ``macro``::

    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -427,7 +427,7 @@ Here an example where ``average`` is set to ``average`` to ``macro``:
    >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='macro') # doctest: +ELLIPSIS
    (0.22..., 0.33..., 0.26..., None)

-Here an example where ``average`` is set to to ``micro``:
+Here an example where ``average`` is set to to ``micro``::

    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -443,7 +443,7 @@ Here an example where ``average`` is set to to ``micro``:
    >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='micro') # doctest: +ELLIPSIS
    (0.33..., 0.33..., 0.33..., None)

-Here an example where ``average`` is set to to ``weighted``:
+Here an example where ``average`` is set to to ``weighted``::

    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
@@ -459,7 +459,7 @@ Here an example where ``average`` is set to to ``weighted``:
    >>> metrics.precision_recall_fscore_support(y_true, y_pred, average='weighted') # doctest: +ELLIPSIS
    (0.22..., 0.33..., 0.26..., None)

-Here an example where ``average`` is set to ``None``:
+Here an example where ``average`` is set to ``None``::

    >>> from sklearn import metrics
    >>> y_true = [0, 1, 2, 0, 1, 2]
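The difference between the ``macro`` and ``micro`` settings in the hunks above can be made concrete in plain Python. A sketch under the assumption of single-label multiclass data; the arrays are hypothetical, since the doctests' ``y_pred`` is cut off in this excerpt:

```python
def precision_per_class(y_true, y_pred, labels):
    # precision for class c: TP_c / predicted_c, defined as 0.0
    # when class c is never predicted
    out = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == c)
        predicted = sum(1 for p in y_pred if p == c)
        out[c] = tp / predicted if predicted else 0.0
    return out

def macro_precision(y_true, y_pred, labels):
    # "macro": unweighted mean of the per-class precisions
    per = precision_per_class(y_true, y_pred, labels)
    return sum(per.values()) / len(labels)

def micro_precision(y_true, y_pred):
    # "micro": pool true positives and predictions over all classes;
    # for single-label problems this collapses to plain accuracy
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return tp / len(y_pred)

y_true = [0, 1, 2, 0, 1, 2]   # hypothetical labels
y_pred = [0, 2, 1, 0, 0, 1]   # hypothetical predictions
print(macro_precision(y_true, y_pred, labels=[0, 1, 2]))  # 0.222...
print(micro_precision(y_true, y_pred))                    # 0.333...
```

Macro averaging gives every class equal weight regardless of its frequency, while micro averaging gives every sample equal weight, which is why the two can differ sharply when some classes are rarely predicted correctly.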
@@ -492,7 +492,7 @@ value and :math:`w` is the predicted decisions as output by
     L_\text{Hinge}(y, w) = \max\left\{1 - wy, 0\right\} = \left|1 - wy\right|_+

 Here a small example demonstrating the use of the :func:`hinge_loss` function
-with a svm classifier:
+with a svm classifier::

    >>> from sklearn import svm
    >>> from sklearn.metrics import hinge_loss
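The hunk above defines the hinge loss as :math:`\max(1 - wy, 0)` per sample. A minimal pure-Python version that averages over samples, assuming labels in ``{-1, +1}``; the label and decision values are hypothetical, and this is a sketch of the formula, not scikit-learn's implementation:

```python
def hinge_loss(y_signed, decisions):
    # mean over samples of max(1 - w * y, 0); y is a signed label in
    # {-1, +1}, w is the real-valued output of decision_function
    return sum(max(1 - w * y, 0) for y, w in zip(y_signed, decisions)) / len(y_signed)

y = [-1, 1, 1]           # hypothetical signed labels
w = [-2.2, 1.3, 0.5]     # hypothetical decision-function outputs
print(hinge_loss(y, w))  # only the third sample falls inside the margin: 0.5 / 3
```

Samples classified correctly with a margin of at least 1 contribute zero loss; the loss then grows linearly as the decision value moves into or past the margin.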
@@ -822,7 +822,7 @@ that can be used for model evaluation.

 One typical use case is to wrap an existing scoring function from the library
 with non default value for its parameters such as the beta parameter for the
-:func:fbeta_score function::
+:func:`fbeta_score` function::

    >>> from sklearn.metrics import fbeta_score, Scorer
    >>> ftwo_scorer = Scorer(fbeta_score, beta=2)
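The wrapping idea in the last hunk (fix ``beta=2`` once, so the resulting callable only needs the labels) can be sketched with ``functools.partial``. The ``fbeta_score`` below is a toy binary-only stand-in for the real scikit-learn function, the label arrays are hypothetical, and this only illustrates the parameter-fixing half: the real ``Scorer`` object additionally handles calling the estimator on the data:

```python
from functools import partial

def fbeta_score(y_true, y_pred, beta):
    # toy stand-in for sklearn.metrics.fbeta_score; binary 0/1 labels only
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    pred_pos = sum(y_pred)
    true_pos = sum(y_true)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / true_pos if true_pos else 0.0
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Analogous in spirit to Scorer(fbeta_score, beta=2): the beta parameter
# is frozen, leaving a (y_true, y_pred) -> score callable.
ftwo = partial(fbeta_score, beta=2)
print(ftwo([0, 1, 1, 1], [1, 1, 1, 0]))  # 0.666...
```

Freezing non-default parameters up front is what lets a generic grid-search loop call every scorer with the same signature.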
