School: SET Faculty of Engineering & Technology
Program: B-TECH
Branch: CSE-AI/ML Semester:
1 Course Code CSA032
2 Course Title Deep Learning and its Applications
3 Credits 4
4 Contact Hours 3
Course Status CORE
5 Course Objective This course aims to present the mathematical, statistical and
computational challenges of building stable representations for
high-dimensional data such as images and text. We will delve into
selected topics of Deep Learning, discussing recent models from
both supervised and unsupervised learning. Special emphasis will
be on convolutional architectures, invariance learning,
unsupervised learning and non-convex optimization. The course
also aims to enable students to understand and demonstrate how to
solve general learning problems from large series of data using
computer-based deep learning algorithms.
6 Course Outcomes (COs) On successful completion of this module, students will be able to:
1. Recall neural networks and relate them to Deep Learning concepts to solve real-life applications.
2. Compare and classify regularization approaches for Deep Learning.
3. Build Convolutional Neural Network models for image analysis and apply them to societal problem solving.
4. Examine sequence models and analyse the relationships among them.
5. Assess different Deep Learning models based on their design processes.
6. Predict the behaviour of Deep Learning models and apply them to solve real-life applications.
7 Course Description This course starts with an introduction to Deep Learning and then
moves on to building, training, and deploying real-world applications
such as object recognition and Computer Vision, image and video
processing, text analytics, Natural Language Processing, recommender
systems, and other types of classifiers.
8 Syllabus Outline CO Mapping
Unit 1 Deep Feedforward Networks
A Recall of Neural Networks, Deep Learning and its practical aspects for real-life applications, Introduction to Simple Deep Neural Networks, Platforms for Deep Learning, Deep Learning Software Libraries. CO1, CO6
B Introduction to Deep Feedforward Networks, Learning XOR, Gradient-Based Learning, Activation Functions: ReLU, Softmax, Sigmoid, Error Functions. CO1
C Architecture Design: Hidden Units, Back-Propagation and Other Differentiation Algorithms. CO1
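To make the Unit 1 topics concrete, a minimal sketch is given below of a small feedforward network learning XOR with gradient-based learning and the ReLU/Sigmoid activations listed above. It assumes TensorFlow/Keras as the software library; the layer sizes, learning rate and epoch count are illustrative choices, not prescribed by the syllabus.

```python
# Minimal sketch (illustrative, not prescribed): a deep feedforward network
# learning XOR with tf.keras; all hyperparameters are assumed values.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),      # hidden units with ReLU
    tf.keras.layers.Dense(1, activation="sigmoid"),   # sigmoid output for a binary target
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.05),  # gradient-based learning
              loss="binary_crossentropy")                # error function
model.fit(X, y, epochs=500, verbose=0)
print(model.predict(X, verbose=0).round())  # typically recovers [[0], [1], [1], [0]]
```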
Unit 2 Regularization for Deep Learning
A Parameter Norm Penalties, Norm Penalties as Constrained Optimization, Regularization and Under-Constrained Problems, Dataset Augmentation, Noise Robustness, Semi-Supervised Learning, Multitask Learning, Early Stopping, Parameter Tying and Parameter Sharing, Bagging, Dropout, Difficulty of Training Deep Neural Networks, Greedy Layer-wise Training, Adversarial Training. CO2
B How Learning Differs from Pure Optimization, Challenges in Neural Network Optimization, Basic Algorithms: Stochastic Gradient Descent, Momentum, Nesterov Momentum, Parameter Initialization Strategies, Algorithms with Adaptive Learning Rates: AdaGrad, RMSProp, Adam. CO2
C Introduction to Autoencoders, Undercomplete Autoencoders, Regularized Autoencoders, Representational Power, Layer Size and Depth, Stochastic Encoders and Decoders, Applications of Encoder-Decoder Models. CO2
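As a brief illustration of several Unit 2 topics (a parameter norm penalty, dropout, early stopping, and SGD with Nesterov momentum), the sketch below again assumes tf.keras; the input size, layer widths, rates and the commented-out training call are assumptions for illustration only.

```python
# Minimal sketch (illustrative): an L2 norm penalty, dropout, Nesterov momentum
# and early stopping combined in one tf.keras model; sizes and rates are assumed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                                   # assumed input size
    tf.keras.layers.Dense(128, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # norm penalty
    tf.keras.layers.Dropout(0.5),                                   # dropout regularization
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9,
                                      nesterov=True),               # SGD + Nesterov momentum
    loss="sparse_categorical_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)  # early stopping
# x_train / y_train are hypothetical placeholders for a real dataset:
# model.fit(x_train, y_train, validation_split=0.1, epochs=100, callbacks=[early_stop])
```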
Unit 3 Convolutional Neural Networks
A Why CNN? Its role, significance and applicability in societal problem solving, The Convolution Operation, Motivation, Pooling, The Neuroscientific Basis for Convolutional Networks. CO1, CO3, CO6
B Prior Probability Distributions, Convolution and Pooling as an Infinitely Strong Prior, Variants of the Basic Convolution Function, Structured Outputs, Data Types with Different Dimensionalities and Numbers of Channels. CO1, CO3
C Efficient Convolution Algorithms, Random or Unsupervised Features of CNNs, Normalization, Applications of CNNs in Computer Vision: ImageNet. CO1, CO3, CO6
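A minimal sketch of the convolution, pooling and normalization operations named in Unit 3 is shown below, assuming tf.keras, a 32x32 RGB input and 10 output classes; the architecture is illustrative, not a reference design.

```python
# Minimal sketch (illustrative): a small CNN with convolution, pooling and
# batch normalization; the input shape and layer sizes are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                             # assumed 32x32 RGB images
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # convolution operation
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # pooling
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.BatchNormalization(),                          # normalization
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),               # assumed 10 classes
])
model.summary()
```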
Unit 4 Sequence Modeling: Recurrent Neural Networks
A Sequence Learning Problems, Recurrent Neural Networks and their significance in the real world, the RNN Model, Backpropagation Through Time, Bidirectional RNNs. CO4, CO6
B Different Types of RNNs, Gated Recurrent Units (GRU), Recursive Neural Networks, The Challenge of Long-Term Dependencies. CO4
C Introduction to Long Short-Term Memory (LSTM) Networks, Learning Algorithms for LSTMs/RNNs, Bidirectional LSTMs. CO4
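The sketch below illustrates the Unit 4 sequence models with a bidirectional LSTM for a toy binary sequence-classification task, again assuming tf.keras; the vocabulary size, embedding width and placeholder data are assumptions.

```python
# Minimal sketch (illustrative): a bidirectional LSTM sequence classifier;
# vocabulary size, layer widths and the placeholder data are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,), dtype="int32"),               # variable-length token ids
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # assumed 10k-word vocabulary
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),    # bidirectional LSTM
    tf.keras.layers.Dense(1, activation="sigmoid"),             # binary sequence label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# padded_sequences / labels are hypothetical placeholders for real data:
# model.fit(padded_sequences, labels, epochs=5)
```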
Unit 5 Deep Networks and the Design Process
A Introduction to Generative Adversarial Networks, Generative Adversarial Networks: Architecture. CO5, CO6
B DCGAN, GAN Hacks, Applications of Generative Adversarial Networks. CO5, CO6
C Practical design process for deep learning techniques based on real-world problems: Performance Metrics, Default Baseline Models, Determining Whether to Gather More Data, Selecting Hyperparameters, Debugging Strategies. CO5, CO6
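To ground the Unit 5 material, a minimal sketch of the two networks that make up a GAN (a DCGAN-style generator and discriminator for 28x28 grayscale images) is given below, assuming tf.keras; the latent size and layer widths are assumptions, and the adversarial training loop is omitted.

```python
# Minimal sketch (illustrative): a DCGAN-style generator and discriminator for
# 28x28 grayscale images; latent size and widths are assumptions, and the
# alternating adversarial training loop is not shown.
import tensorflow as tf

latent_dim = 100  # assumed size of the random noise vector

generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(7 * 7 * 128, activation="relu"),
    tf.keras.layers.Reshape((7, 7, 128)),
    tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same",
                                    activation="relu"),         # 14x14 feature maps
    tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding="same",
                                    activation="tanh"),         # 28x28x1 generated image
])

discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),              # real vs. generated
])
```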
Mode of examination Theory
Weightage Distribution CA 30%, MTE 20%, ETE 50%
Text Books 1. Deep Learning, by Goodfellow I., Bengio Y. & Courville A. (2016)
2. Visualizing and Understanding Convolutional Networks, by Matt
Zeiler, Rob Fergus
3. TensorFlow: a system for large-scale machine learning, by Martín
A., Paul B., Jianmin C., Zhifeng C., Andy D. et al. (2019)
Reference Books 1. Deep learning in neural networks, by Juergen Schmidhuber
(2015)
2. https://cs230.stanford.edu/syllabus/
3. https://towardsdatascience.com/september-edition-machine-learning-
case-studies-a3a61dc94f23
4. Deep Learning: A Practitioner's Approach, by Josh Patterson,
O'Reilly.
Online Materials