Autoencoders
Presented by:
2019220013
BALDE LANSANA (兰撒那)
Introduction
An autoencoder is a neural network that is trained to
attempt to copy its input to its output.
Internally, it has a hidden layer h that describes a code
used to represent the input.
The network may be viewed as consisting of two parts: an
encoder function h = f(x) and a decoder that produces a
reconstruction r = g(h).
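As a toy numerical sketch of this two-part view, a linear encoder and decoder can be composed in NumPy (the weights and sizes below are illustrative assumptions, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # a toy 4-dimensional input
W_enc = rng.normal(size=(2, 4))  # encoder weights: 4 inputs -> 2-dim code
W_dec = rng.normal(size=(4, 2))  # decoder weights: 2-dim code -> 4 outputs

h = W_enc @ x                    # encoder: code h = f(x)
r = W_dec @ h                    # decoder: reconstruction r = g(h)
loss = np.mean((x - r) ** 2)     # training drives this toward zero, so r ≈ x
print(loss)
```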
What are autoencoders?
"Autoencoding" is a data compression algorithm where the
compression and decompression functions are:
data-specific,
lossy,
learned automatically from examples rather than
engineered by a human.
What are autoencoders?
Hyperparameters of Autoencoders:
There are four hyperparameters that we need to set before training an
autoencoder (illustrated in the sketch below):
1. Code size: the number of nodes in the middle (bottleneck) layer; a smaller code means stronger compression.
2. Number of layers: how deep the encoder and decoder are.
3. Number of nodes per layer: the width of each layer, typically shrinking toward the code and mirrored in the decoder.
4. Loss function: e.g. mean squared error, or binary cross-entropy when inputs are scaled to [0, 1].
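As a concrete sketch, these four knobs might be written down before building a model (all values are illustrative assumptions, not recommendations):

```python
# Hypothetical hyperparameter settings for an autoencoder on 784-dim inputs.
code_size = 32                 # 1. size of the bottleneck ("Code") layer
num_hidden_layers = 2          # 2. depth of the encoder (mirrored in decoder)
nodes_per_layer = [128, 64]    # 3. width of each hidden layer
loss_function = "mse"          # 4. e.g. mean squared error, or
                               #    "binary_crossentropy" for [0, 1] pixels
```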
Application of autoencoders
Autoencoders are used for tasks that involve:
Feature extraction
Image coloring
Feature variation
Data compression
Learning a generative model of the data
Data denoising
Dimensionality reduction for data visualization
Application of autoencoders
Image Coloring
Autoencoders can be used to convert a black-and-white picture into a
colored image: depending on what is in the picture, the network can
often infer what the colors should be.
Application of autoencoders
Dimensionality Reduction
The reconstructed image resembles the input but is produced from a
much lower-dimensional code, so the autoencoder yields a similar
image from far fewer stored values.
Application of autoencoders
Denoising Image
The input seen by the autoencoder is not the raw input but a
stochastically corrupted version. A denoising autoencoder is thus
trained to reconstruct the original input from the noisy version.
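A minimal sketch of this training setup in Keras, assuming a compiled autoencoder model `autoencoder` and clean training images `x_train` scaled to [0, 1] (both names are hypothetical stand-ins here):

```python
import numpy as np

# Stochastically corrupt the clean inputs with Gaussian noise
# (the 0.3 noise level is an assumption).
noise = 0.3 * np.random.normal(size=x_train.shape)
x_noisy = np.clip(x_train + noise, 0.0, 1.0)

# Key point: the noisy version goes in, but the clean original
# is the reconstruction target.
autoencoder.fit(x_noisy, x_train, epochs=10, batch_size=256)
```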
Application of autoencoders
Watermark Removal
Autoencoders are also used to remove watermarks from images, or to
remove an unwanted object from the frames of a video or a movie.
Application of autoencoders
Feature variation
It extracts only the required features of an image and generates the
output with noise and unnecessary variation removed.
Architecture of Autoencoders
An autoencoder consists of three parts, sketched in code below:
1. Encoder
2. Code
3. Decoder
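A minimal sketch of these three parts in Keras (the layer sizes and activations are assumptions for illustration, not prescribed by the slides):

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784   # e.g. flattened 28x28 MNIST images
code_size = 32    # size of the "Code" bottleneck

inputs = keras.Input(shape=(input_dim,))
# 1. Encoder: compresses the input toward the code
x = layers.Dense(128, activation="relu")(inputs)
# 2. Code: the compact internal representation h = f(x)
code = layers.Dense(code_size, activation="relu")(x)
# 3. Decoder: reconstructs the input from the code, r = g(h)
x = layers.Dense(128, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="sigmoid")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
```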
Types of Autoencoders
Convolution Autoencoders
Autoencoders in their traditional formulation do not take into account
the fact that a signal can be seen as a sum of other signals.
Convolutional autoencoders use the convolution operator to exploit this
observation: they learn to encode the input as a set of simple signals
and then try to reconstruct the input from them, which can also modify
the geometry or the reflectance of the image.
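A sketch of a convolutional autoencoder in Keras for 28x28 grayscale images (the layer choices are assumptions for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))
# Encoder: convolutions + downsampling extract simple local signals
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
code = layers.MaxPooling2D(2, padding="same")(x)      # 7x7x8 code
# Decoder: upsampling + convolutions reassemble the image from them
x = layers.Conv2D(8, 3, activation="relu", padding="same")(code)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

conv_autoencoder = keras.Model(inputs, outputs)
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```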
Types of Autoencoders
Sparse Autoencoders
Sparse autoencoders offer us an alternative method for
introducing an information bottleneck without requiring
a reduction in the number of nodes at our hidden
layers. Instead, we’ll construct our loss function such that
we penalize activations within a layer.
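A sketch of this penalty via an L1 activity regularizer in Keras (the layer width and the 1e-5 coefficient are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

inputs = keras.Input(shape=(784,))
# The hidden layer keeps its full width; the bottleneck comes from
# penalizing its activations, pushing most of them toward zero.
h = layers.Dense(256, activation="relu",
                 activity_regularizer=regularizers.l1(1e-5))(inputs)
outputs = layers.Dense(784, activation="sigmoid")(h)

sparse_autoencoder = keras.Model(inputs, outputs)
sparse_autoencoder.compile(optimizer="adam", loss="mse")
```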
Types of Autoencoders
Deep Autoencoders
A deep autoencoder is composed of two symmetrical deep-belief
networks: a set of shallow layers (typically four or five) forms the
encoding half of the net, and a mirrored set forms the decoding half.
Types of Autoencoders
Contractive Autoencoders
A contractive autoencoder is an unsupervised deep
learning technique that helps a neural network encode unlabeled
training data.
This is accomplished by constructing a loss term which penalizes
large derivatives of our hidden layer activations with respect to the
input training examples, essentially penalizing instances where a
small change in the input leads to a large change in the encoding
space.
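Written out, the loss is the reconstruction error plus a penalty on the Frobenius norm of the Jacobian of the code with respect to the input, weighted by a coefficient λ:

```latex
\mathcal{L}(x) \;=\;
  \underbrace{\lVert x - g(f(x)) \rVert^{2}}_{\text{reconstruction}}
  \;+\;
  \lambda \underbrace{\left\lVert \frac{\partial f(x)}{\partial x}
    \right\rVert_{F}^{2}}_{\text{contractive penalty}}
```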
How to train the AutoEncoder
Start with random weights in the two networks (the encoder and the
decoder).
Train them by minimizing the discrepancy between the original data
and its reconstruction.
Gradients are obtained by the chain rule, back-propagating the error
from the decoder network into the encoder network, as sketched below.
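A sketch of this procedure, reusing the `autoencoder` from the architecture example above and MNIST as the data (both choices are assumptions):

```python
from tensorflow import keras

# Load MNIST and scale pixels to [0, 1]
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Weights start random; fit() minimizes the discrepancy between input
# and reconstruction, back-propagating gradients from the decoder into
# the encoder via the chain rule.
autoencoder.fit(x_train, x_train,
                epochs=20, batch_size=256,
                validation_data=(x_test, x_test))
```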
Comparing PCA to an autoencoder
The MNIST database holds 60,000 images for training the system and
another 10,000 for testing it.
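One way to run the comparison is to reduce MNIST to the same number of dimensions with both methods and compare reconstruction error; a self-contained sketch (the 32-dimensional code and training settings are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras

(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# PCA: linear projection onto the top 32 principal components
pca = PCA(n_components=32).fit(x_train)
x_pca = pca.inverse_transform(pca.transform(x_test))
print("PCA reconstruction MSE:", np.mean((x_test - x_pca) ** 2))

# Autoencoder: nonlinear encoder/decoder with a 32-unit code
inputs = keras.Input(shape=(784,))
code = keras.layers.Dense(32, activation="relu")(inputs)
outputs = keras.layers.Dense(784, activation="sigmoid")(code)
ae = keras.Model(inputs, outputs)
ae.compile(optimizer="adam", loss="mse")
ae.fit(x_train, x_train, epochs=10, batch_size=256, verbose=0)
print("AE  reconstruction MSE:", ae.evaluate(x_test, x_test, verbose=0))
```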
Possible research ideas
Combine an autoencoder with another machine learning algorithm:
classification (LDA or SVM) or a regression technique.
Use more than two data sets to compare the performance of an
autoencoder and PCA.
Find an image data set and use an autoencoder as a generative model.
Use autoencoders as a data compression technique for IoT.
References
[1] https://www.edureka.co/blog/autoencoders-tutorial/
[2] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, pp. 504-507, 2006.
[3] G. E. Hinton, "A practical guide to training restricted Boltzmann machines," Technical Report 2010-003, University of Toronto, 2010.
[4] http://cl.naist.jp/~kevinduh/a/deep2014/
[5] L. Deng, "The MNIST Database of Handwritten Digit Images for Machine Learning Research," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141-142, Nov. 2012.
[6] A. Kane and N. Shiri, "Selecting the Top-k Discriminative Features Using Principal Component Analysis," 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), Barcelona, Spain, 2016, pp. 639-646.
[7] C. Hu, X. Hou and Y. Lu, "Improving the Architecture of an Autoencoder for Dimension Reduction," 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE
Thank You
Pattern Recognition