L1-norm and L2-norm regularization doc #3586


Merged
merged 12 commits on May 28, 2019
Explain ER and loss
wschin committed May 24, 2019
commit 0c7d9f9cfd21bf1447c29d652266f7526e086006
1 change: 1 addition & 0 deletions docs/api-reference/regularization-l1-l2.md
@@ -1,5 +1,6 @@
This class uses [empirical risk minimization](https://en.wikipedia.org/wiki/Empirical_risk_minimization) (i.e., ERM)

@shmoradims commented on May 16, 2019

> This

Please add a header '### Regularization' so that the following text becomes a separate section. Also move it after all the algo details.
#ByDesign

@wschin (Member, Author) replied

No. I don't mean only regularization. It is a brief introduction to the whole optimization problem.


In reply to: 284914354

to formulate the optimization problem built upon the collected data.
Note that empirical risk is usually measured by applying a loss function to the model's predictions on the collected data points.
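As a sketch of the objective this passage describes (the symbols $m$, $L$, $f$, $\textbf{w}$, $\lambda_1$, and $\lambda_2$ are assumed here for illustration and are not taken from this page), ERM with L1-norm and L2-norm regularization minimizes the average loss over the $m$ collected data points $(\textbf{x}_i, y_i)$ plus penalty terms on the model parameters:

$$
\min_{\textbf{w}} \; \frac{1}{m} \sum_{i=1}^{m} L\big(f(\textbf{x}_i; \textbf{w}), y_i\big) + \lambda_1 \lVert \textbf{w} \rVert_1 + \lambda_2 \lVert \textbf{w} \rVert_2^2
$$

Larger values of $\lambda_1$ and $\lambda_2$ penalize large parameter magnitudes more strongly, which is one way to mitigate the overfitting discussed next.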
If the training data does not contain enough data points
(for example, to train a linear model in $n$-dimensional space, we need at least $n$ data points),
[overfitting](https://en.wikipedia.org/wiki/Overfitting) may happen so that