
Commit e38f9af

1 parent e1c00d8 commit e38f9af

Some content is hidden: large commits have some diffs collapsed by default, so only a subset of the 80 changed files is shown below.

80 files changed: +121 additions, −121 deletions
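Every changed line follows the same pattern: links that pointed at the old `lab-ml/nn` GitHub repository are rewritten to the renamed `labmlai/annotated_deep_learning_paper_implementations` repository. As a rough illustration only (not necessarily how this commit was produced), a bulk rewrite like this could be scripted as below; the `docs/` directory and the two repository paths come from the diffs, while the script itself is a hypothetical sketch.

# Hypothetical sketch (not the actual tooling behind this commit): rewrite the
# old repository path in every generated HTML file under docs/.
from pathlib import Path

OLD = "lab-ml/nn"  # old org/repo, as seen in the removed lines
NEW = "labmlai/annotated_deep_learning_paper_implementations"  # new org/repo

changed = 0
for path in Path("docs").rglob("*.html"):
    text = path.read_text(encoding="utf-8")
    if OLD in text:
        # Replace every occurrence (covers both github.com and Colab badge URLs).
        path.write_text(text.replace(OLD, NEW), encoding="utf-8")
        changed += 1

print(f"updated {changed} files")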

docs/capsule_networks/index.html

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ <h1>Capsule Networks</h1>
 <p>I used <a href="https://github.com/jindongwang/Pytorch-CapsuleNet">jindongwang/Pytorch-CapsuleNet</a> to clarify some
 confusions I had with the paper.</p>
 <p>Here&rsquo;s a notebook for training a Capsule Network on MNIST dataset.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/capsule_networks/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/capsule_networks/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/e7c08e08586711ebb3e30242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
 <div class='code'>

docs/capsule_networks/readme.html

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ <h1><a href="https://nn.labml.ai/capsule_networks/index.html">Capsule Networks</
 <p>I used <a href="https://github.com/jindongwang/Pytorch-CapsuleNet">jindongwang/Pytorch-CapsuleNet</a> to clarify some
 confusions I had with the paper.</p>
 <p>Here&rsquo;s a notebook for training a Capsule Network on MNIST dataset.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/capsule_networks/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/capsule_networks/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/e7c08e08586711ebb3e30242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
 <div class='code'>

docs/cfr/index.html

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ <h1>Regret Minimization in Games with Incomplete Information (CFR)</h1>
 where we sample from the game tree and estimate the regrets.</p>
 <p>We tried to keep our Python implementation easy-to-understand like a tutorial.
 We run it on <a href="kuhn/index.html">a very simple imperfect information game called Kuhn poker</a>.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/cfr/kuhn/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a></p>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/cfr/kuhn/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a></p>
 <p><a href="https://twitter.com/labmlai/status/1407186002255380484"><img alt="Twitter thread" src="https://img.shields.io/twitter/url?style=social&amp;url=https%3A%2F%2Ftwitter.com%2Flabmlai%2Fstatus%2F1407186002255380484" /></a>
 Twitter thread</p>
 <h2>Introduction</h2>

docs/cfr/kuhn/index.html

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ <h1><a href="../index.html">Counterfactual Regret Minimization (CFR)</a> on Kuhn
 </ul>
 <p>He we extend the <code>InfoSet</code> class and <code>History</code> class defined in <a href="../index.html"><code>__init__.py</code></a>
 with Kuhn Poker specifics.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/cfr/kuhn/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/cfr/kuhn/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/7c35d3fad29711eba588acde48001122"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
 <div class='code'>

docs/gan/cycle_gan/index.html

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ <h1>Cycle GAN</h1>
 The discriminators test whether the generated images look real.</p>
 <p>This file contains the model code as well as the training code.
 We also have a Google Colab notebook.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/gan/cycle_gan/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/gan/cycle_gan/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/93b11a665d6811ebaac80242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
 <div class='code'>

docs/gan/wasserstein/index.html

Lines changed: 1 addition & 1 deletion
@@ -133,7 +133,7 @@ <h1>Wasserstein GAN (WGAN)</h1>
 while keeping $K$ bounded. <em>One way to keep $K$ bounded is to clip all weights in the neural
 network that defines $f$ clipped within a range.</em></p>
 <p>Here is the code to try this on a <a href="experiment.html">simple MNIST generation experiment</a>.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/gan/wasserstein/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a></p>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/gan/wasserstein/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a></p>
 </div>
 <div class='code'>
 <div class="highlight"><pre><span class="lineno">87</span><span></span><span class="kn">import</span> <span class="nn">torch.utils.data</span>

docs/hypernetworks/hyper_lstm.html

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ <h1>HyperNetworks - HyperLSTM</h1>
 by David Ha gives a good explanation of HyperNetworks.</p>
 <p>We have an experiment that trains a HyperLSTM to predict text on Shakespeare dataset.
 Here&rsquo;s the link to code: <a href="experiment.html"><code>experiment.py</code></a></p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/hypernetworks/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/hypernetworks/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/9e7f39e047e811ebbaff2b26e3148b3d"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 <p>HyperNetworks use a smaller network to generate weights of a larger network.
 There are two variants: static hyper-networks and dynamic hyper-networks.

docs/index.html

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@
 <h1><a href="index.html">labml.ai Annotated PyTorch Paper Implementations</a></h1>
 <p>This is a collection of simple PyTorch implementations of
 neural networks and related algorithms.
-<a href="https://github.com/lab-ml/nn">These implementations</a> are documented with explanations,
+<a href="https://github.com/labmlai/annotated_deep_learning_paper_implementations">These implementations</a> are documented with explanations,
 and the <a href="index.html">website</a>
 renders these as side-by-side formatted notes.
 We believe these would help you understand these algorithms better.</p>

docs/normalization/batch_channel_norm/index.html

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ <h1>Batch-Channel Normalization</h1>
 batch normalization.</p>
 <p>Here is <a href="../weight_standardization/experiment.html">the training code</a> for training
 a VGG network that uses weight standardization to classify CIFAR-10 data.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/normalization/weight_standardization/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/weight_standardization/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/f4a783a2a7df11eb921d0242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a>
 <a href="https://wandb.ai/vpj/cifar10/runs/3flr4k8w"><img alt="WandB" src="https://img.shields.io/badge/wandb-run-yellow" /></a></p>
 </div>

docs/normalization/batch_norm/index.html

Lines changed: 1 addition & 1 deletion
@@ -132,7 +132,7 @@ <h2>Inference</h2>
 mean and variance during the training phase and use that for inference.</p>
 <p>Here&rsquo;s <a href="mnist.html">the training code</a> and a notebook for training
 a CNN classifier that uses batch normalization for MNIST dataset.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/normalization/batch_norm/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/normalization/batch_norm/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://app.labml.ai/run/011254fe647011ebbb8e0242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
 <div class='code'>
