DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
IV B.Tech (CSE) I- SEMESTER
COURSE : DEEP LEARNING
Maximum Marks
Course Code | Category | Hours/Week (L T P) | Credits (C) | CIA (CIE + AAT = TOT) | SEE | Total
     -      |   Core   |       2 1 0        |      3      |      25 + 5 = 30      |  70 |  100
Lecture Classes: 42 Hrs | Tutorials: Nil | Practical Classes: Nil | Total Classes: 42 Hrs
Course Objectives:
Demonstrate the major technology trends driving Deep Learning
Build, train, and apply fully connected deep neural networks
Implement efficient (vectorized) neural networks
Analyse the key parameters and hyperparameters in a neural network's architecture
Course Outcomes:
After completion of the course, students will be able to
Demonstrate the mathematical foundations of neural networks
Describe the basics of machine learning
Differentiate the architectures of deep neural networks
Build a convolutional neural network
Build and train RNNs and LSTMs
UNIT-I Lecture 8Hrs
Linear Algebra: Scalars, Vectors, Matrices and Tensors, Matrix Operations, Types of Matrices, Norms, Eigendecomposition, Singular Value Decomposition, Principal Components Analysis.
Probability and Information Theory: Random Variables, Probability Distributions, Marginal Probability, Conditional Probability, Bayes' Rule, Information Theory.
Numerical Computation: Overflow and Underflow, Gradient-Based Optimization, Constrained Optimization, Linear Least Squares.
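The Unit-I thread from gradient-based optimization to linear least squares fits in a short NumPy sketch. This is an illustration only, not prescribed content; the synthetic data, step-size rule, and iteration budget are all assumptions:

```python
import numpy as np

# Gradient-based optimization applied to linear least squares:
# minimize f(w) = 0.5 * ||X w - y||^2 by gradient descent.
# X, y, and the iteration budget are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
# Step size 1/L, where L is the largest eigenvalue of X^T X,
# keeps the iteration stable (any value below 2/L converges).
lr = 1.0 / np.linalg.eigvalsh(X.T @ X).max()
for _ in range(2000):
    grad = X.T @ (X @ w - y)  # gradient of the least-squares objective
    w -= lr * grad
```

After enough iterations, `w` matches the closed-form least-squares solution, which connects the Gradient-Based Optimization and Linear Least Squares topics above.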
UNIT-II Lecture 9Hrs
Machine Learning Basics: Capacity, Overfitting and Underfitting, Hyperparameters and Validation Sets, Estimators, Bias and Variance, Maximum Likelihood Estimation, Bayesian Statistics, Supervised and Unsupervised Learning, Stochastic Gradient Descent, Challenges Motivating Deep Learning.
Deep Feedforward Networks: Learning XOR, Gradient-Based Learning, Hidden Units, Architecture Design, Back-Propagation and Other Differentiation Algorithms.
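The feedforward-network topics above (learning XOR, gradient-based learning, back-propagation) can be sketched in plain NumPy. The layer width, learning rate, and iteration count below are assumed settings for illustration, not a prescribed implementation:

```python
import numpy as np

# A 2-4-1 feedforward network trained by back-propagation to learn XOR.
# All hyperparameters (width, learning rate, iterations) are assumptions.
rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden layer (forward pass)
    p = sigmoid(h @ W2 + b2)            # output probability
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    losses.append(loss)
    dz2 = (p - y) / len(X)              # back-propagation starts at the output
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h ** 2)      # chain rule through the tanh units
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1    # gradient descent step
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
```

The cross-entropy loss falls as training proceeds, which is exactly the gradient-based learning behaviour the unit describes; XOR is the classic example of a function a single-layer network cannot represent but one hidden layer can.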
UNIT-III Lecture 8Hrs
Regularization for Deep Learning: Parameter Norm Penalties, Norm Penalties as Constrained Optimization, Regularization and Under-Constrained Problems, Dataset Augmentation,
Noise Robustness, Semi-Supervised Learning, Multi-Task Learning, Early Stopping, Parameter Tying and Parameter Sharing, Sparse Representations, Bagging and Other Ensemble Methods,
Dropout, Adversarial Training, Tangent Distance, Tangent Prop and Manifold Tangent Classifier.
Optimization for Training Deep Models: How Learning Differs from Pure Optimization, Challenges in Neural Network Optimization, Basic Algorithms, Parameter Initialization Strategies, Algorithms with Adaptive Learning Rates, Approximate Second-Order Methods, Optimization Strategies and Meta-Algorithms.
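The simplest of the parameter norm penalties listed above, the L2 penalty (weight decay), can be illustrated on linear regression, where its shrinking effect on the learned weights is visible directly. The synthetic data and penalty strength below are assumptions for the sketch:

```python
import numpy as np

# L2 parameter norm penalty (weight decay) on linear regression:
# w = (X^T X + lam * I)^{-1} X^T y.  Data and lam are illustrative.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(30)

def fit(X, y, lam):
    # lam = 0 gives ordinary least squares; lam > 0 adds the L2 penalty.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_plain = fit(X, y, 0.0)    # no regularization
w_decay = fit(X, y, 10.0)   # L2 penalty shrinks the weight norm
```

The regularized solution always has a smaller norm than the unregularized one, which is the sense in which norm penalties constrain model capacity.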
UNIT-IV Lecture 8Hrs
Convolutional Networks: The Convolution Operation, Pooling, Convolution, Basic Convolution Functions, Structured Outputs, Data Types, Efficient Convolution Algorithms, Random or Unsupervised Features, The Neuroscientific Basis for Convolutional Networks.
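The convolution operation and pooling named above can be written out directly. As in most deep learning libraries, the sketch below actually computes cross-correlation (no kernel flip); the input and kernel are illustrative:

```python
import numpy as np

# The convolution operation (as cross-correlation) and max pooling.
def conv2d(x, k):
    # Valid convolution: slide the kernel over every position where it fits.
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    # Non-overlapping s x s max pooling; trailing rows/cols are dropped.
    H, W = x.shape
    return x[:H // s * s, :W // s * s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 input
k = np.array([[1., 0.], [0., -1.]])            # toy 2x2 kernel
feat = conv2d(x, k)                            # 3x3 feature map
pooled = max_pool(feat)
```

A 2x2 kernel over a 4x4 input yields a 3x3 feature map, showing how valid convolution shrinks spatial size by (kernel size - 1) in each dimension.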
UNIT-V Lecture 8Hrs
Sequence Modeling: Recurrent and Recursive Nets: Unfolding Computational Graphs, Recurrent Neural Networks, Bidirectional RNNs, Encoder-Decoder Sequence-to-Sequence Architectures, Deep Recurrent Networks, Recursive Neural Networks, Echo State Networks, LSTM and Other Gated RNNs, Optimization for Long-Term Dependencies, Autoencoders, Deep Generative Models.
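Unfolding the computational graph of a vanilla recurrent network, the first Unit-V topic, amounts to a loop over time steps applying h_t = tanh(x_t W_xh + h_{t-1} W_hh + b). All sizes and the weight scale in the sketch are illustrative assumptions:

```python
import numpy as np

# Forward pass of a vanilla RNN, unfolding the graph over T time steps.
# Sequence length, dimensions, and weight scale are assumptions.
rng = np.random.default_rng(3)
T, d_in, d_h = 6, 3, 5
xs = rng.standard_normal((T, d_in))            # toy input sequence
W_xh = rng.standard_normal((d_in, d_h)) * 0.1  # input-to-hidden weights
W_hh = rng.standard_normal((d_h, d_h)) * 0.1   # hidden-to-hidden (recurrent) weights
b = np.zeros(d_h)

h = np.zeros(d_h)                              # initial hidden state
hs = []
for x_t in xs:                                 # one update per time step
    h = np.tanh(x_t @ W_xh + h @ W_hh + b)
    hs.append(h)
hs = np.stack(hs)                              # (T, d_h) hidden-state trajectory
```

The same weight matrices are reused at every step; that parameter sharing is what makes repeated multiplication by W_hh the source of the vanishing/exploding-gradient problem that LSTMs and other gated RNNs address.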
Text books:
1. Ian Goodfellow, Yoshua Bengio, Aaron Courville, "Deep Learning", MIT Press, 2016.
2. Josh Patterson and Adam Gibson, "Deep Learning: A Practitioner's Approach", O'Reilly Media, First Edition, 2017.
Reference Books:
1. Nikhil Buduma, "Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms", O'Reilly/Shroff Publishers, 2019.
2. Douwe Osinga, "Deep Learning Cookbook: Practical Recipes to Get Started Quickly", O'Reilly/Shroff Publishers, 2019.
Online Learning Resources:
1. https://keras.io/datasets/
2. http://deeplearning.net/tutorial/deeplearning.pdf
3. https://arxiv.org/pdf/1404.7828v4.pdf
4. https://www.cse.iitm.ac.in/~miteshk/CS7015.html
5. https://www.deeplearningbook.org
6. https://nptel.ac.in/courses/106105215