All the experiments presented in the article [1] can be reproduced using the `XP*-*.py` files.
- `XP1-1_sd1_dirichlet.py` reproduces the study of the Dirichlet kernel in sparse deconvolution.
- `XP1-2_sd1_gaussian.py` reproduces the study of the Gaussian kernel in sparse deconvolution.
- `XP1-3_sd1_lambda.py` reproduces the study of the regularization parameter λ in sparse deconvolution.
- `XP1-4_sd1_initialization.py` reproduces the study of initialization in sparse deconvolution.
- `XP2-1_tln_relu_squared.py` reproduces the study of the ReLU activation and squared loss in the two-layer network example.
- `XP2-2_tln_lambda.py` reproduces the study of the regularization parameter λ in the two-layer network example.
- `XP2-3_tln_initialization.py` reproduces the study of initialization in the two-layer network example.
- `XP2-4_tln_relu_logistic.py` reproduces the study of the ReLU activation and logistic loss in the two-layer network example.
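For example, after installing the dependencies listed in `requirements.txt`, an experiment can be reproduced from the command line (assuming a standard Python 3 setup):

```
pip install -r requirements.txt
python XP1-1_sd1_dirichlet.py
```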
The core classes for each of the two examples are implemented in:
- `sparse_deconvolution_1D.py` for the sparse deconvolution example.
- `two_layer_nn.py` for the two-layer neural network example.
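To make the two-layer example concrete, the model studied in [1] averages m neurons, f(x) = (1/m) Σ_j b_j σ(⟨a_j, x⟩). Below is a minimal NumPy sketch of this parameterization; the function and variable names are illustrative only and do not mirror the actual classes in `two_layer_nn.py`:

```python
import numpy as np

def relu(z):
    """ReLU activation: sigma(z) = max(z, 0)."""
    return np.maximum(z, 0.0)

def two_layer_forward(X, A, b):
    """Mean-field two-layer network f(x) = (1/m) * sum_j b_j * relu(<a_j, x>).

    X : (n, d) input samples
    A : (m, d) first-layer weights, one row per neuron ("particle")
    b : (m,)   second-layer weights
    """
    m = A.shape[0]
    return relu(X @ A.T) @ b / m

# Toy usage: m = 100 neurons on d = 2 dimensional inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
A = rng.normal(size=(100, 2))
b = rng.normal(size=(100,))
print(two_layer_forward(X, A, b))  # one prediction per sample
```

The 1/m normalization is what lets the many-neuron limit be viewed as a measure over parameters, which is the object the convergence analysis of [1] works with.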
The forward-backward algorithm and the stochastic gradient descent algorithm are implemented in `optimizer.py`.
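As a reminder of what a forward-backward (proximal gradient) step looks like, here is a self-contained sketch for an l1-regularized least-squares problem. This is a generic illustration, not the API of `optimizer.py`; the names below are hypothetical:

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def forward_backward_step(w, grad_smooth, step, lam):
    """One iteration for min_w F(w) + lam * ||w||_1:
    a forward (explicit) gradient step on the smooth term F,
    then a backward (implicit) proximal step on the l1 penalty."""
    return soft_threshold(w - step * grad_smooth(w), step * lam)

# Toy usage: F(w) = 0.5 * ||K w - y||^2 for a random matrix K.
rng = np.random.default_rng(0)
K = rng.normal(size=(30, 10))
y = rng.normal(size=30)
grad = lambda w: K.T @ (K @ w - y)
step = 1.0 / np.linalg.norm(K, 2) ** 2  # 1/L, L = Lipschitz constant of grad F
w = np.zeros(10)
for _ in range(200):
    w = forward_backward_step(w, grad, step, lam=0.1)
```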
Other files implement miscellaneous functions and classes:
- `activations.py`: the activation function classes.
- `kernels.py`: the kernel classes.
- `losses.py`: the loss function classes.
- `parameters.py`: the classes structuring the parameters of each example, i.e. the common parameters and the custom parameters for each experiment.
- `plot.py`: the functions used to plot the results.
- `tests.py`: some tests.
- `requirements.txt`: the packages required for reproducing the examples.
- `requirements_test.txt`: the additional packages required for testing.
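The tests can be run after installing the extra test dependencies; the invocation below assumes the suite is pytest-compatible, which may differ from the repository's actual test runner:

```
pip install -r requirements_test.txt
python -m pytest tests.py
```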
[1] Lenaic Chizat and Francis Bach. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. In Advances in Neural Information Processing Systems (NeurIPS), 2018.