
Commit 5163413

Reference to See Also section for example of usage in all estimators (dotnet#3577)
1 parent e3c2043 commit 5163413

66 files changed: +133 −31 lines

docs/api-reference/algo-details-fastforest.md

Lines changed: 3 additions & 1 deletion
@@ -28,4 +28,6 @@ For more see:
 * [Quantile regression
 forest](http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf)
 * [From Stumps to Trees to
-Forests](https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/)
+Forests](https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/)
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-fasttree.md

Lines changed: 3 additions & 1 deletion
@@ -35,4 +35,6 @@ For more information see:
 * [Wikipedia: Gradient boosting (Gradient tree
 boosting).](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting)
 * [Greedy function approximation: A gradient boosting
-machine.](https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451)
+machine.](https://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1013203451)
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-gam.md

Lines changed: 3 additions & 1 deletion
@@ -15,4 +15,6 @@ the average prediction over the training set, and the shape functions are
 normalized to represent the deviation from the average prediction. This results
 in models that are easily interpreted simply by inspecting the intercept and the
 shape functions. See the sample below for an example of how to train a GAM model
-and inspect and interpret the results.
+and inspect and interpret the results.
+
+Check the See Also section for links to examples of the usage.
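
The doc text above promises a sample; a minimal sketch of what training a GAM regressor and reaching the intercept looks like is shown below (assumes an existing MLContext `mlContext` and an IDataView `trainData` with "Label" and "Features" columns; member names reflect the 1.x API as best recalled, not this commit):

    // Train a Generalized Additive Model regressor.
    var pipeline = mlContext.Regression.Trainers.Gam(labelColumnName: "Label", featureColumnName: "Features");
    var model = pipeline.Fit(trainData);
    // The trained parameters carry the intercept (Bias) and the per-feature shape functions.
    var gamModel = model.Model;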

docs/api-reference/algo-details-lightgbm.md

Lines changed: 2 additions & 0 deletions
@@ -3,3 +3,5 @@ LightGBM is an open source implementation of gradient boosting decision tree.
 For implementation details, please see [LightGBM's official
 documentation](https://lightgbm.readthedocs.io/en/latest/index.html) or this
 [paper](https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf).
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-sdca.md

Lines changed: 3 additions & 1 deletion
@@ -57,4 +57,6 @@ For more information, see:
 * [Scaling Up Stochastic Dual Coordinate
 Ascent.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf)
 * [Stochastic Dual Coordinate Ascent Methods for Regularized Loss
-Minimization.](http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf)
+Minimization.](http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf)
+
+Check the See Also section for links to examples of the usage.

docs/api-reference/algo-details-sgd.md

Lines changed: 3 additions & 1 deletion
@@ -6,4 +6,6 @@ Hogwild Stochastic Gradient Descent for binary classification that supports
 multi-threading without any locking. If the associated optimization problem is
 sparse, Hogwild Stochastic Gradient Descent achieves a nearly optimal rate of
 convergence. For more details about Hogwild Stochastic Gradient Descent can be
-found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
+found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
+
+Check the See Also section for links to examples of the usage.

src/Microsoft.ML.Data/Transforms/ColumnConcatenatingEstimator.cs

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ namespace Microsoft.ML.Transforms
 /// If the input columns' data type is a vector the output column data type remains the same. However, the size of
 /// the vector will be the sum of the sizes of the input vectors.
 ///
-/// Check the See Also section for links to examples of the usage.
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="TransformExtensionsCatalog.Concatenate(TransformsCatalog, string, string[])"/>

src/Microsoft.ML.Data/Transforms/ColumnCopying.cs

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ namespace Microsoft.ML.Transforms
 ///
 /// The resulting [ColumnCopyingTransformer](xref:Microsoft.ML.Transforms.ColumnCopyingTransformer) creates a new column, named as specified in the output column name parameters, and
 /// copies the data from the input column to this new column.
-/// Check the See Also section for links to examples of the usage.
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.Data/Transforms/ColumnSelecting.cs

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ namespace Microsoft.ML.Transforms
 /// In the case of serialization, every column in the schema will be written out. If there are columns
 /// that should not be saved, this estimator can be used to remove them.
 ///
-/// Check the See Also section for links to examples of the usage.
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="TransformExtensionsCatalog.DropColumns(TransformsCatalog, string[])"/>

src/Microsoft.ML.Data/Transforms/FeatureContributionCalculationTransformer.cs

Lines changed: 2 additions & 0 deletions
@@ -284,6 +284,8 @@ private Delegate GetValueGetter<TSrc>(DataViewRow input, int colSrc)
 /// while keeping the other features constant. The contribution of feature F1 for the given example is the difference between the original score
 /// and the score obtained by taking the opposite decision at the node corresponding to feature F1. This algorithm extends naturally to models with
 /// many decision trees.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ExplainabilityCatalog.CalculateFeatureContribution(TransformsCatalog, ISingleFeaturePredictionTransformer{ICalculateFeatureContribution}, int, int, bool)"/>

src/Microsoft.ML.Data/Transforms/Hashing.cs

Lines changed: 1 addition & 0 deletions
@@ -1115,6 +1115,7 @@ public override void Process()
 /// | Input column data type | Vector or scalars of numeric, boolean, [text](xref:Microsoft.ML.Data.TextDataViewType), [DateTime](xref: System.DateTime) and [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
 /// | Output column data type | Vector or scalar [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
 ///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ConversionsExtensionsCatalog.Hash(TransformsCatalog.ConversionTransforms, string, string, int, int)"/>

src/Microsoft.ML.Data/Transforms/KeyToValue.cs

Lines changed: 1 addition & 0 deletions
@@ -512,6 +512,7 @@ public override JToken SavePfa(BoundPfaContext ctx, JToken srcToken)
 /// | Input column data type | [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
 /// | Output column data type | Type of the original data, prior to converting to [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
 ///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ConversionsExtensionsCatalog.MapKeyToValue(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[])"/>

src/Microsoft.ML.Data/Transforms/KeyToVector.cs

Lines changed: 2 additions & 0 deletions
@@ -737,6 +737,8 @@ private bool SaveAsOnnxCore(OnnxContext ctx, int iinfo, ColInfo info, string src
 ///
 /// It iterates over keys in data, and for each key it produces vector of key cardinality filled with zeros except position of key value in which it put's `1.0`.
 /// For vector of keys it can either produce vector of counts for each key or concatenate them together into one vector.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref=" ConversionsExtensionsCatalog.MapKeyToVector(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[], bool)"/>

src/Microsoft.ML.Data/Transforms/Normalizer.cs

Lines changed: 2 additions & 0 deletions
@@ -60,6 +60,8 @@ namespace Microsoft.ML.Transforms
 /// * [NormalizeLogMeanVariance](xref:Microsoft.ML.NormalizationCatalog.NormalizeLogMeanVariance(Microsoft.ML.TransformsCatalog,System.String,System.String,System.Int64,System.Boolean))
 /// * [NormalizeBinning](xref:Microsoft.ML.NormalizationCatalog.NormalizeBinning(Microsoft.ML.TransformsCatalog,System.String,System.String,System.Int64,System.Boolean,System.Int32))
 /// * [NormalizeSupervisedBinning](xref:Microsoft.ML.NormalizationCatalog.NormalizeSupervisedBinning(Microsoft.ML.TransformsCatalog,System.String,System.String,System.String,System.Int64,System.Boolean,System.Int32,System.Int32))
+///
+/// Check the above links for usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.Data/Transforms/TypeConverting.cs

Lines changed: 1 addition & 0 deletions
@@ -526,6 +526,7 @@ private bool SaveAsOnnxCore(OnnxContext ctx, int iinfo, string srcVariableName,
 /// | Input column data type | Vector or primitive numeric, boolean, [text](xref:Microsoft.ML.Data.TextDataViewType), [System.DateTime](xref:System.DateTime) and [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
 /// | Output column data type | Vector or primitive numeric, boolean, [text](xref:Microsoft.ML.Data.TextDataViewType), [System.DateTime](xref:System.DateTime) and [key](xref:Microsoft.ML.Data.KeyDataViewType) type. |
 ///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ConversionsExtensionsCatalog.ConvertType(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[], DataKind)"/>

src/Microsoft.ML.Data/Transforms/ValueMapping.cs

Lines changed: 4 additions & 0 deletions
@@ -56,6 +56,8 @@ namespace Microsoft.ML.Transforms
 ///
 /// Values can be repeated to allow for multiple keys to map to the same value, however keys can not be repeated. The mapping between keys and values
 /// can be specified either through lists, where the key list and value list must be the same size or can be done through an [System.IDataView](xref:Microsoft.ML.IDataView).
+///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ConversionsExtensionsCatalog.MapValue(TransformsCatalog.ConversionTransforms, string, IDataView, DataViewSchema.Column, DataViewSchema.Column, string)"/>
@@ -152,6 +154,8 @@ public sealed override SchemaShape GetOutputSchema(SchemaShape inputSchema)
 ///
 /// Values can be repeated to allow for multiple keys to map to the same value, however keys can not be repeated. The mapping between keys and values
 /// can be specified either through lists, where the key list and value list must be the same size or can be done through an [System.IDataView](xref:Microsoft.ML.IDataView).
+///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <typeparam name="TKey">Specifies the key type.</typeparam>

src/Microsoft.ML.Data/Transforms/ValueToKeyMappingEstimator.cs

Lines changed: 2 additions & 0 deletions
@@ -29,6 +29,8 @@ namespace Microsoft.ML.Transforms
 /// If the key is not found in the dictionary, it is assigned the missing value indicator.
 /// This dictionary mapping values to keys is most commonly learnt from the unique values in input data,
 /// but can be defined through other means: either with the mapping defined, or as loaded from an external file.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]></format>
 /// </remarks>
 /// <seealso cref="ConversionsExtensionsCatalog.MapValueToKey(TransformsCatalog.ConversionTransforms, InputOutputColumnPair[], int, KeyOrdinality, bool, IDataView)"/>

src/Microsoft.ML.FastTree/FastTreeTweedie.cs

Lines changed: 2 additions & 0 deletions
@@ -50,6 +50,8 @@ namespace Microsoft.ML.Trainers.FastTree
 /// For an introduction to Gradient Boosting, and more information, see:
 /// [Wikipedia: Gradient boosting(Gradient tree boosting)](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting) or
 /// [Greedy function approximation: A gradient boosting machine](https://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.aos/1013203451).
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.ImageAnalytics/ImageGrayscale.cs

Lines changed: 2 additions & 1 deletion
@@ -236,7 +236,8 @@ protected override Delegate MakeGetter(DataViewRow input, int iinfo, Func<int, b
 /// a technique known as [data augmentation](http://www.stat.harvard.edu/Faculty_Content/meng/JCGS01.pdf).
 /// For end-to-end image processing pipelines, and scenarios in your applications, see the
 /// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
-/// Check the See Also section for links to more examples of the usage.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.ImageAnalytics/ImageLoader.cs

Lines changed: 2 additions & 1 deletion
@@ -240,7 +240,8 @@ protected override DataViewSchema.DetachedColumn[] GetOutputColumnsCore()
 /// The images to load need to be in the formats supported by <xref:System.Drawing.Bitmap>.
 /// For end-to-end image processing pipelines, and scenarios in your applications, see the
 /// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.</a>
-/// Check the See Also section for links to examples of the usage.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.ImageAnalytics/ImagePixelExtractor.cs

Lines changed: 2 additions & 1 deletion
@@ -488,7 +488,8 @@ private VectorDataViewType[] ConstructTypes()
 /// converts image into vector of known size of floats or bytes. Size and data type depends on specified paramaters.
 /// For end-to-end image processing pipelines, and scenarios in your applications, see the
 /// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
-/// Check the See Also section for links to examples of the usage.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.ImageAnalytics/ImageResizer.cs

Lines changed: 2 additions & 1 deletion
@@ -419,7 +419,8 @@ protected override Delegate MakeGetter(DataViewRow input, int iinfo, Func<int, b
 /// further processing.
 /// For end-to-end image processing pipelines, and scenarios in your applications, see the
 /// [examples](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/getting-started) in the machinelearning-samples github repository.
-/// Check the See Also section for links to examples of the usage.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.ImageAnalytics/VectorToImageTransform.cs

Lines changed: 2 additions & 1 deletion
@@ -437,7 +437,8 @@ private static ImageDataViewType[] ConstructTypes(VectorToImageConvertingEstimat
 ///
 /// The resulting <xref:Microsoft.ML.Transforms.Image.VectorToImageConvertingTransformer> creates a new column, named as specified in the output column name parameters, and
 /// creates image from the data in the input column to this new column.
-/// Check the See Also section for links to examples of the usage.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.KMeansClustering/KMeansPlusPlusTrainer.cs

Lines changed: 2 additions & 0 deletions
@@ -69,6 +69,8 @@ namespace Microsoft.ML.Trainers
 /// For more information on K-means, and K-means++ see:
 /// [K-means](https://en.wikipedia.org/wiki/K-means_clustering)
 /// [K-means++](https://en.wikipedia.org/wiki/K-means%2b%2b)
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.Mkl.Components/OlsLinearRegression.cs

Lines changed: 2 additions & 0 deletions
@@ -54,6 +54,8 @@ namespace Microsoft.ML.Trainers
 /// [Ordinary least squares (OLS)](https://en.wikipedia.org/wiki/Ordinary_least_squares) is a parameterized regression method.
 /// It assumes that the conditional mean of the dependent variable follows a linear function of the dependent variables.
 /// The regression parameters can be estimated by minimizing the squares of the difference between observed values and the predictions
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.Mkl.Components/SymSgdClassificationTrainer.cs

Lines changed: 2 additions & 0 deletions
@@ -60,6 +60,8 @@ namespace Microsoft.ML.Trainers
 /// to produce the same result as what a sequential symbolic stochastic gradient descent would have produced, in expectation.
 ///
 /// For more information see [Parallel Stochastic Gradient Descent with Sound Combiners](https://arxiv.org/abs/1705.08030).
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.OnnxTransformer/DnnImageFeaturizerTransform.cs

Lines changed: 2 additions & 0 deletions
@@ -73,6 +73,8 @@ internal DnnImageFeaturizerInput(string outputColumnName, string inputColumnName
 /// the names of the ONNX model nodes.
 ///
 /// Any platform requirement for this estimator will follow the requirements on the <xref:Microsoft.ML.Transforms.Onnx.OnnxScoringEstimator>.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.OnnxTransformer/OnnxTransform.cs

Lines changed: 4 additions & 0 deletions
@@ -59,6 +59,8 @@ namespace Microsoft.ML.Transforms.Onnx
 ///
 /// To create this estimator use the following:
 /// [ApplyOnnxModel](xref:Microsoft.ML.OnnxCatalog.ApplyOnnxModel*)
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>
@@ -541,6 +543,8 @@ public NamedOnnxValue GetNamedOnnxValue()
 ///
 /// To create this estimator use the following:
 /// [ApplyOnnxModel](xref:Microsoft.ML.OnnxCatalog.ApplyOnnxModel*)
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.PCA/PcaTrainer.cs

Lines changed: 2 additions & 0 deletions
@@ -70,6 +70,8 @@ namespace Microsoft.ML.Trainers
 ///
 /// Note that the algorithm can be made into Kernel PCA by applying the <xref:Microsoft.ML.Transforms.ApproximatedKernelTransformer>
 /// to the data before passing it to the trainer.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.Recommender/MatrixFactorizationTrainer.cs

Lines changed: 1 addition & 0 deletions
@@ -102,6 +102,7 @@ namespace Microsoft.ML.Trainers
 /// * For the parallel coordinate descent method used and one-class matrix factorization formula, see [Selection of Negative Samples for One-class Matrix Factorization](https://www.csie.ntu.edu.tw/~cjlin/papers/one-class-mf/biased-mf-sdm-with-supp.pdf).
 /// * For details in the underlying library used, see [LIBMF: A Library for Parallel Matrix Factorization in Shared-memory Systems](https://www.csie.ntu.edu.tw/~cjlin/papers/libmf/libmf_open_source.pdf).
 ///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.StandardTrainers/FactorizationMachine/FactorizationMachineTrainer.cs

Lines changed: 1 addition & 0 deletions
@@ -81,6 +81,7 @@ namespace Microsoft.ML.Trainers
 /// Algorithm details is described in Algorithm 3 in [this online document](https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf).
 /// The minimized loss function is [logistic loss](https://en.wikipedia.org/wiki/Loss_functions_for_classification), so the trained model can be viewed as a non-linear logistic regression.
 ///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.StandardTrainers/Standard/LogisticRegression/LogisticRegression.cs

Lines changed: 2 additions & 0 deletions
@@ -70,6 +70,8 @@ namespace Microsoft.ML.Trainers
 ///
 /// An aggressive regularization (that is, assigning large coefficients to L1-norm or L2-norm regularization terms) can harm predictive capacity by excluding important variables out of the model.
 /// Therefore, choosing the right regularization coefficients is important when applying logistic regression.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.StandardTrainers/Standard/LogisticRegression/MulticlassLogisticRegression.cs

Lines changed: 2 additions & 0 deletions
@@ -90,6 +90,8 @@ namespace Microsoft.ML.Trainers
 ///
 /// An aggressive regularization (that is, assigning large coefficients to L1-norm or L2-norm regularization terms) can harm predictive capacity by excluding important variables out of the model.
 /// Therefore, choosing the right regularization coefficients is important when applying maximum entropy classifier.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.StandardTrainers/Standard/MulticlassClassification/MulticlassNaiveBayesTrainer.cs

Lines changed: 2 additions & 0 deletions
@@ -55,6 +55,8 @@ namespace Microsoft.ML.Trainers
 /// This multi-class trainer accepts "binary" feature values of type float:
 /// feature values that are greater than zero are treated as `true` and feature values
 /// that are less or equal to 0 are treated as `false`.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>

src/Microsoft.ML.StandardTrainers/Standard/MulticlassClassification/OneVersusAllTrainer.cs

Lines changed: 2 additions & 0 deletions
@@ -76,6 +76,8 @@ namespace Microsoft.ML.Trainers
 /// requires that the trainer store a lot more intermediate state in the form of
 /// L-BFGS history for all classes *simultaneously*, rather than just one-by-one
 /// as would be needed for a one-versus-all classification model.
+///
+/// Check the See Also section for links to usage examples.
 /// ]]>
 /// </format>
 /// </remarks>
