
Commit c098da7

First line of docstrings for models and optimizers (QB3#120)
1 parent 6664dfd commit c098da7

File tree

9 files changed: +25 −17 lines changed


doc/api.rst

Lines changed: 6 additions & 6 deletions
@@ -58,11 +58,11 @@ Criterion
 Algorithms
 ==========
 
-:py:mod:`sparse_ho`:
+:py:mod:`sparse_ho.algo`:
 
-.. currentmodule:: sparse_ho
+.. currentmodule:: sparse_ho.algo
 
-.. automodule:: sparse_ho
+.. automodule:: sparse_ho.algo
    :no-members:
    :no-inherited-members:
 
@@ -78,11 +78,11 @@ Algorithms
 Optimizers
 ==========
 
-:py:mod:`sparse_ho`:
+:py:mod:`sparse_ho.optimizers`:
 
-.. currentmodule:: sparse_ho
+.. currentmodule:: sparse_ho.optimizers
 
-.. automodule:: sparse_ho
+.. automodule:: sparse_ho.optimizers
    :no-members:
    :no-inherited-members:

sparse_ho/models/enet.py

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 
 
 class ElasticNet(BaseModel):
-    """Sparse ho ElasticNet model (inner problem)
+    """Sparse ho ElasticNet model (inner problem).
 
     Parameters
     ----------

sparse_ho/models/lasso.py

Lines changed: 2 additions & 1 deletion
@@ -11,7 +11,8 @@
 
 
 class Lasso(BaseModel):
-    """Linear Model trained with L1 prior as regularizer (aka the Lasso)
+    """Linear Model trained with L1 prior as regularizer (aka the Lasso).
+
     The optimization objective for Lasso is:
     (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
 
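The Lasso objective shown in the docstring above can be checked numerically; a minimal NumPy sketch (the function name `lasso_objective` and the toy data are illustrative, not part of sparse_ho's API):

```python
import numpy as np

def lasso_objective(X, y, w, alpha):
    # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
    n_samples = X.shape[0]
    residual = y - X @ w
    return residual @ residual / (2 * n_samples) + alpha * np.abs(w).sum()

X = np.eye(2)
y = np.array([1.0, 2.0])
w = np.array([1.0, 1.0])
obj = lasso_objective(X, y, w, alpha=0.1)  # 0.25 + 0.2 = 0.45
```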

sparse_ho/models/logreg.py

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@
 
 class SparseLogreg(BaseModel):
     """Sparse Logistic Regression classifier.
+
     The objective function is:
 
     sum_1^n_samples log(1 + e^{-y_i x_i^T w}) + 1. / C * ||w||_1
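The objective above can be evaluated directly; a small sketch assuming dense NumPy arrays and labels y_i in {-1, 1} (names here are illustrative):

```python
import numpy as np

def sparse_logreg_objective(X, y, w, C):
    # sum_i log(1 + e^{-y_i x_i^T w}) + 1. / C * ||w||_1
    margins = y * (X @ w)
    return np.log1p(np.exp(-margins)).sum() + np.abs(w).sum() / C

X = np.eye(2)
y = np.array([1.0, -1.0])
w = np.zeros(2)
obj = sparse_logreg_objective(X, y, w, C=1.0)  # 2 * log(2) at w = 0
```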

sparse_ho/models/svm.py

Lines changed: 4 additions & 3 deletions
@@ -12,10 +12,11 @@
 
 
 class SVM(BaseModel):
-    """The support vector Machine classifier without bias
+    """Support Vector Machine classifier without bias.
+
     The optimization problem is solved in the dual:
-    1/2 r^T(y * X)(y * X)^T r - sum_i^n r_i
-    s.t 0 <= r_i <= C
+        1/2 r^T(y * X)(y * X)^T r - sum_i^n r_i
+        s.t 0 <= r_i <= C
 
     Parameters
     ----------
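The dual objective from the docstring can be written out as follows; a hedged sketch with illustrative names, where r holds the dual variables and the box constraint 0 <= r_i <= C is checked separately:

```python
import numpy as np

def svm_dual_objective(X, y, r):
    # 1/2 r^T (y * X)(y * X)^T r - sum_i r_i
    G = y[:, None] * X            # each row of X scaled by its label
    return 0.5 * r @ (G @ (G.T @ r)) - r.sum()

def is_feasible(r, C):
    # s.t 0 <= r_i <= C
    return bool(np.all((r >= 0) & (r <= C)))

X = np.eye(2)
y = np.array([1.0, -1.0])
r = np.array([1.0, 1.0])
obj = svm_dual_objective(X, y, r)  # 0.5 * 2 - 2 = -1.0
```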

sparse_ho/models/svr.py

Lines changed: 2 additions & 1 deletion
@@ -81,7 +81,8 @@ def _update_beta_jac_bcd_aux_sparse(data, indptr, indices, y, epsilon, beta,
 
 
 class SVR(BaseModel):
-    """The support vector regression without bias
+    """The support vector regression without bias.
+
     The optimization problem is solved in the dual.
 
     Parameters

sparse_ho/models/wlasso.py

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@
 
 
 class WeightedLasso(BaseModel):
-    r"""Linear Model trained with weighted L1 regularizer (aka weighted Lasso)
+    r"""Linear Model trained with weighted L1 regularizer (aka weighted Lasso).
 
     The optimization objective for weighted Lasso is:
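The weighted-Lasso objective itself is truncated in the hunk above; a plausible sketch assuming the standard form with one non-negative weight per coefficient, (1 / (2 * n_samples)) * ||y - Xw||^2_2 + sum_j alpha_j * |w_j| (the helper name and `alphas` array are illustrative):

```python
import numpy as np

def weighted_lasso_objective(X, y, w, alphas):
    # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + sum_j alphas[j] * |w[j]|
    n_samples = X.shape[0]
    residual = y - X @ w
    return residual @ residual / (2 * n_samples) + np.abs(w) @ alphas

X = np.eye(2)
y = np.array([1.0, 2.0])
w = np.array([1.0, 1.0])
obj = weighted_lasso_objective(X, y, w, alphas=np.array([0.1, 0.2]))  # 0.25 + 0.3
```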

sparse_ho/optimizers/adam.py

Lines changed: 5 additions & 3 deletions
@@ -5,16 +5,18 @@
 
 
 class Adam(BaseOptimizer):
-    """This Adam code is taken from
+    """ADAM optimizer for the outer problem.
+
+    This Adam code is taken from
     https://github.com/sagarvegad/Adam-optimizer/blob/master/Adam.py
 
     Parameters
     ----------
     n_outer: int, optional (default=100).
-        number of maximum updates of alpha.
+        Number of maximum updates of alpha.
     epsilon: float, optional (default=1e-3)
     lr: float, optional (default=1e-2)
-        learning rate
+        Learning rate
     beta_1: float, optional (default=0.9)
     beta_2: float, optional (default=0.999)
     verbose: bool, optional (default=False)
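For reference, one Adam update with the defaults listed above can be sketched as follows (illustrative names; not sparse_ho's actual implementation, which follows the linked Adam.py):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-2, beta_1=0.9, beta_2=0.999,
              epsilon=1e-3):
    # Exponentially weighted first and second moments of the gradient.
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad ** 2
    # Bias correction for the zero initialization of m and v.
    m_hat = m / (1 - beta_1 ** t)
    v_hat = v / (1 - beta_2 ** t)
    # Scaled gradient step.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=1.0, m=m, v=v, t=1)
```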

sparse_ho/optimizers/line_search.py

Lines changed: 3 additions & 1 deletion
@@ -5,7 +5,9 @@
 
 
 class LineSearch(BaseOptimizer):
-    """This line-search code is taken from here:
+    """Gradient descent with line search for the outer problem.
+
+    The code is taken from here:
     https://github.com/fabianp/hoag/blob/master/hoag/hoag.py
 
     Parameters
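A generic backtracking line search with the Armijo sufficient-decrease condition, as a hedged sketch (sparse_ho's actual logic lives in the hoag code linked above; all names here are illustrative):

```python
import numpy as np

def backtracking_line_search(f, x, grad, step=1.0, shrink=0.5, c=1e-4,
                             max_iter=20):
    # Shrink the step until f decreases enough along -grad (Armijo condition).
    fx = f(x)
    direction = -grad
    for _ in range(max_iter):
        if f(x + step * direction) <= fx + c * step * (grad @ direction):
            return step
        step *= shrink
    return step

f = lambda x: 0.5 * (x @ x)      # simple quadratic, gradient is x
x = np.array([2.0])
step = backtracking_line_search(f, x, grad=x)  # full step already decreases f
```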
