
Commit ccb9ee2

typos in readmes
1 parent 3b1e75d commit ccb9ee2

File tree

8 files changed: +142 −138 lines changed


docs/normalization/batch_norm/readme.html

Lines changed: 12 additions & 12 deletions
@@ -82,11 +82,11 @@ <h3>Internal Covariate Shift</h3>
 For example, let&rsquo;s say there are two layers $l_1$ and $l_2$.
 During the beginning of the training $l_1$ outputs (inputs to $l_2$)
 could be in distribution $\mathcal{N}(0.5, 1)$.
-Then, after some training steps, it could move to $\mathcal{N}(0.5, 1)$.
+Then, after some training steps, it could move to $\mathcal{N}(0.6, 1.5)$.
 This is <em>internal covariate shift</em>.</p>
 <p>Internal covariate shift will adversely affect training speed because the later layers
-($l_2$ in the above example) has to adapt to this shifted distribution.</p>
-<p>By stabilizing the distribution batch normalization minimizes the internal covariate shift.</p>
+($l_2$ in the above example) have to adapt to this shifted distribution.</p>
+<p>By stabilizing the distribution, batch normalization minimizes the internal covariate shift.</p>
 <h2>Normalization</h2>
 <p>It is known that whitening improves training speed and convergence.
 <em>Whitening</em> is linearly transforming inputs to have zero mean, unit variance,
@@ -95,9 +95,9 @@ <h3>Normalizing outside gradient computation doesn&rsquo;t work</h3>
 <p>Normalizing outside the gradient computation using pre-computed (detached)
 means and variances doesn&rsquo;t work. For instance. (ignoring variance), let
 <script type="math/tex; mode=display">\hat{x} = x - \mathbb{E}[x]</script>
-where $x = u + b$ and $b$ is a trained bias.
-and $\mathbb{E}[x]$ is outside gradient computation (pre-computed constant).</p>
-<p>Note that $\hat{x}$ has no effect of $b$.
+where $x = u + b$ and $b$ is a trained bias
+and $\mathbb{E}[x]$ is an outside gradient computation (pre-computed constant).</p>
+<p>Note that $\hat{x}$ has no effect on $b$.
 Therefore,
 $b$ will increase or decrease based
 $\frac{\partial{\mathcal{L}}}{\partial x}$,
@@ -106,14 +106,14 @@ <h3>Normalizing outside gradient computation doesn&rsquo;t work</h3>
 <h3>Batch Normalization</h3>
 <p>Whitening is computationally expensive because you need to de-correlate and
 the gradients must flow through the full whitening calculation.</p>
-<p>The paper introduces simplified version which they call <em>Batch Normalization</em>.
+<p>The paper introduces a simplified version which they call <em>Batch Normalization</em>.
 First simplification is that it normalizes each feature independently to have
 zero mean and unit variance:
 <script type="math/tex; mode=display">\hat{x}^{(k)} = \frac{x^{(k)} - \mathbb{E}[x^{(k)}]}{\sqrt{Var[x^{(k)}]}}</script>
 where $x = (x^{(1)} &hellip; x^{(d)})$ is the $d$-dimensional input.</p>
 <p>The second simplification is to use estimates of mean $\mathbb{E}[x^{(k)}]$
 and variance $Var[x^{(k)}]$ from the mini-batch
-for normalization; instead of calculating the mean and variance across whole dataset.</p>
+for normalization; instead of calculating the mean and variance across the whole dataset.</p>
 <p>Normalizing each feature to zero mean and unit variance could affect what the layer
 can represent.
 As an example paper illustrates that, if the inputs to a sigmoid are normalized
@@ -126,17 +126,17 @@ <h3>Batch Normalization</h3>
 like $Wu + b$ the bias parameter $b$ gets cancelled due to normalization.
 So you can and should omit bias parameter in linear transforms right before the
 batch normalization.</p>
-<p>Batch normalization also makes the back propagation invariant to the scale of the weights.
-And empirically it improves generalization, so it has regularization effects too.</p>
+<p>Batch normalization also makes the back propagation invariant to the scale of the weights
+and empirically it improves generalization, so it has regularization effects too.</p>
 <h2>Inference</h2>
 <p>We need to know $\mathbb{E}[x^{(k)}]$ and $Var[x^{(k)}]$ in order to
 perform the normalization.
 So during inference, you either need to go through the whole (or part of) dataset
 and find the mean and variance, or you can use an estimate calculated during training.
 The usual practice is to calculate an exponential moving average of
 mean and variance during the training phase and use that for inference.</p>
-<p>Here&rsquo;s <a href="https://nn.labml.ai/normalization/layer_norm/mnist.html">the training code</a> and a notebook for training
-a CNN classifier that use batch normalization for MNIST dataset.</p>
+<p>Here&rsquo;s <a href="mnist.html">the training code</a> and a notebook for training
+a CNN classifier that uses batch normalization for MNIST dataset.</p>
 <p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/normalization/batch_norm/mnist.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
 <a href="https://web.lab-ml.com/run?uuid=011254fe647011ebbb8e0242ac1c0002"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
 </div>
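
For orientation, the readme edited above normalizes each feature with mini-batch mean and variance during training and uses an exponential moving average of those statistics at inference, followed by a learned scale and shift (the paper's $\gamma$ and $\beta$) to preserve what the layer can represent. The following is a minimal PyTorch sketch of that idea only; it is not the implementation in this repository, and the class name SimpleBatchNorm and its default hyperparameters are made up for illustration.

    # Minimal sketch of per-feature batch normalization, assuming 2D input
    # of shape [batch_size, channels]. Not the labml_nn implementation.
    import torch
    import torch.nn as nn


    class SimpleBatchNorm(nn.Module):
        def __init__(self, channels: int, eps: float = 1e-5, momentum: float = 0.1):
            super().__init__()
            self.eps = eps
            self.momentum = momentum
            # Learned scale and shift restore representational power after normalization
            self.scale = nn.Parameter(torch.ones(channels))
            self.shift = nn.Parameter(torch.zeros(channels))
            # Exponential moving averages of mean and variance, used at inference
            self.register_buffer('exp_mean', torch.zeros(channels))
            self.register_buffer('exp_var', torch.ones(channels))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            if self.training:
                # Mini-batch estimates of mean and variance per feature
                mean = x.mean(dim=0)
                var = x.var(dim=0, unbiased=False)
                with torch.no_grad():
                    self.exp_mean = (1 - self.momentum) * self.exp_mean + self.momentum * mean
                    self.exp_var = (1 - self.momentum) * self.exp_var + self.momentum * var
            else:
                # Inference uses the running estimates collected during training
                mean, var = self.exp_mean, self.exp_var
            x_hat = (x - mean) / torch.sqrt(var + self.eps)
            return self.scale * x_hat + self.shift


    if __name__ == '__main__':
        bn = SimpleBatchNorm(4)
        out = bn(torch.randn(8, 4))   # training mode: batch statistics
        bn.eval()
        out = bn(torch.randn(8, 4))   # eval mode: moving averages

As in the readme, a linear layer feeding such a module could drop its bias, since any constant offset is removed by the mean subtraction.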
