NPTEL (https://swayam.gov.in/explorer?ncCode=NPTEL) » Introduction to Machine Learning (course)

Week 3 : Assignment 3
The due date for submitting this assignment has passed.
Due on 2025-02-12, 23:59 IST.
Assignment submitted on 2025-02-07, 09:49 IST.
1) Which of the following statement(s) about decision boundaries and discriminant functions of classifiers is/are true? (1 point)

In a binary classification problem, all points x on the decision boundary satisfy δ₁(x) = δ₂(x).
In a three-class classification problem, all points on the decision boundary satisfy δ₁(x) = δ₂(x) = δ₃(x).
In a three-class classification problem, all points on the decision boundary satisfy at least one of δ₁(x) = δ₂(x), δ₂(x) = δ₃(x) or δ₃(x) = δ₁(x).
If x does not lie on the decision boundary, then all points lying in a sufficiently small neighbourhood around x belong to the same class.

Yes, the answer is correct.
Score: 1
Accepted Answers:
In a binary classification problem, all points x on the decision boundary satisfy δ₁(x) = δ₂(x).
In a three-class classification problem, all points on the decision boundary satisfy at least one of δ₁(x) = δ₂(x), δ₂(x) = δ₃(x) or δ₃(x) = δ₁(x).
If x does not lie on the decision boundary, then all points lying in a sufficiently small neighbourhood around x belong to the same class.
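The third and fourth accepted statements can be checked numerically. Below is a minimal sketch (Python with NumPy assumed; the three linear discriminant functions are invented purely for illustration) that walks along a line segment and verifies that wherever the predicted class changes, the two leading discriminants tie:

```python
import numpy as np

# Three illustrative linear discriminant functions (chosen arbitrarily
# for this sketch; any delta_k would do).
def deltas(x):
    return np.array([x[0], x[1], 1.0 - x[0] - x[1]])

# Walk along a segment that crosses a decision boundary and check that
# where the argmax (the predicted class) changes, two discriminants tie.
ts = np.linspace(0.0, 1.0, 10001)
points = np.stack([ts, 0.3 * np.ones_like(ts)], axis=1)
labels = np.array([np.argmax(deltas(p)) for p in points])

for i in np.where(np.diff(labels) != 0)[0]:
    d = deltas(points[i])
    top_two = np.sort(d)[-2:]
    # At a label change the two leading discriminants nearly coincide.
    print(f"x = {points[i]}, deltas = {d}, gap = {top_two[1] - top_two[0]:.4f}")
```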
2) You train an LDA classifier on a dataset with 2 classes. The decision boundary is significantly different from the one obtained by logistic regression. What could be the reason? (1 point)

The underlying data distribution is Gaussian
The two classes have equal covariance matrices
The underlying data distribution is not Gaussian
The two classes have unequal covariance matrices

Partially Correct.
Score: 0.5
Accepted Answers:
The underlying data distribution is not Gaussian
The two classes have unequal covariance matrices
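The accepted answers can be illustrated with a small experiment. The sketch below (scikit-learn and NumPy assumed; the synthetic data and parameter values are invented for illustration) fits LDA and logistic regression to two Gaussian classes with unequal covariances and compares the learned linear boundaries:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two Gaussian classes with clearly unequal covariance matrices, so the
# LDA assumption of a shared covariance is violated.
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=200)
X1 = rng.multivariate_normal([2, 2], [[4.0, 1.5], [1.5, 0.5]], size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
logreg = LogisticRegression().fit(X, y)

# Compare the (normalized) boundary normals; unequal covariances tend
# to pull the two linear boundaries apart.
w_lda = lda.coef_[0] / np.linalg.norm(lda.coef_[0])
w_log = logreg.coef_[0] / np.linalg.norm(logreg.coef_[0])
print("LDA normal:   ", w_lda, "intercept:", lda.intercept_)
print("LogReg normal:", w_log, "intercept:", logreg.intercept_)
```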
3) The following table gives the binary ground truth labels yᵢ for four input points xᵢ (not given). We have a logistic regression model with some parameter values that computes the probability p₁(xᵢ) that the label is 1. Compute the likelihood of observing the data given these model parameters. (1 point)

[Table of yᵢ and p₁(xᵢ) values not shown.]

0.072
0.144
0.288
0.002

Yes, the answer is correct.
Score: 1
Accepted Answers:
0.288
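Since the quiz's table is not reproduced above, here is how the likelihood is computed in general. The sketch below (NumPy assumed) uses hypothetical labels and probabilities, not the quiz's actual values, and evaluates the Bernoulli likelihood L = ∏ᵢ p₁(xᵢ)^yᵢ (1 − p₁(xᵢ))^(1−yᵢ):

```python
import numpy as np

# Hypothetical stand-ins for the quiz's (unavailable) table: ground
# truth labels y_i and the model's predicted probabilities p1(x_i).
y = np.array([1, 0, 1, 0])
p1 = np.array([0.8, 0.4, 0.6, 0.2])

# Bernoulli likelihood of the observed labels given the model:
# L = prod_i p1(x_i)^{y_i} * (1 - p1(x_i))^{1 - y_i}
likelihood = np.prod(p1**y * (1 - p1)**(1 - y))
print(likelihood)  # 0.8 * 0.6 * 0.6 * 0.8 = 0.2304
```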
4) Which of the following statement(s) about logistic regression is/are true? (1 point)

It learns a model for the probability distribution of the data points in each class.
The output of a linear model is transformed to the range (0, 1) by a sigmoid function.
The parameters are learned by minimizing the mean-squared loss.
The parameters are learned by maximizing the log-likelihood.

Yes, the answer is correct.
Score: 1
Accepted Answers:
The output of a linear model is transformed to the range (0, 1) by a sigmoid function.
The parameters are learned by maximizing the log-likelihood.
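The two accepted statements correspond directly to the two ingredients of logistic regression: a sigmoid-transformed linear model, trained by maximizing the log-likelihood. A minimal from-scratch sketch (NumPy assumed; the data and learning rate are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -2.0])
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))  # sigmoid squashes output to (0, 1)
    grad = X.T @ (y - p)            # gradient of the log-likelihood
    w += lr * grad / len(y)         # ascend the log-likelihood
print(w)  # roughly recovers true_w
```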
5) Consider a modified form of logistic regression given below, where k is a positive constant and β₀ and β₁ are parameters. Find the expression for p(x). (1 point)

log((1 − p(x)) / (k p(x))) = β₀ + β₁x

e^(−β₀) / (k e^(−β₀) + e^(β₁x))
e^(−β₁x) / (e^(−β₀) + e^(kβ₁x))
e^(β₁x) / (k e^(β₀) + e^(β₁x))
e^(−β₁x) / (k e^(β₀) + e^(−β₁x))

Yes, the answer is correct.
Score: 1
Accepted Answers:
e^(−β₁x) / (k e^(β₀) + e^(−β₁x))
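To verify the accepted answer: exponentiating both sides gives (1 − p(x)) / (k p(x)) = e^(β₀ + β₁x), so p(x) = 1 / (1 + k e^(β₀ + β₁x)); multiplying numerator and denominator by e^(−β₁x) yields e^(−β₁x) / (k e^(β₀) + e^(−β₁x)), the accepted option. A quick symbolic check (assuming SymPy is available):

```python
import sympy as sp

p, x, k, b0, b1 = sp.symbols('p x k beta0 beta1')

# Exponentiate both sides of log((1 - p)/(k p)) = beta0 + beta1*x,
# then solve the resulting algebraic equation for p.
sol = sp.solve(sp.Eq((1 - p) / (k * p), sp.exp(b0 + b1 * x)), p)
print(sp.simplify(sol[0]))
# -> 1/(k*exp(beta0 + beta1*x) + 1), i.e. the accepted option after
#    multiplying numerator and denominator by exp(-beta1*x).
```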
6) Consider a Bayesian classifier for a 5-class classification problem. The following table gives the class-conditioned density fₖ(x) for class k ∈ {1, 2, …, 5} at some point x in the input space. (1 point)

[Table of class-conditioned densities fₖ(x) not shown.]

Let πₖ denote the prior probability of class k. Which of the following statement(s) about the predicted label at x is/are true? (One or more choices may be correct.)

The predicted label at x will always be class 4.
If 2πᵢ ≤ πᵢ₊₁ ∀i ∈ {1, …, 4}, the predicted class must be class 4.
If πᵢ ≥ (3/2)πᵢ₊₁ ∀i ∈ {1, …, 4}, the predicted class must be class 1.
The predicted label at x can never be class 5.

No, the answer is incorrect.
Score: 0
Accepted Answers:
If 2πᵢ ≤ πᵢ₊₁ ∀i ∈ {1, …, 4}, the predicted class must be class 4.
If πᵢ ≥ (3/2)πᵢ₊₁ ∀i ∈ {1, …, 4}, the predicted class must be class 1.
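Whatever the (missing) density table contains, a Bayesian classifier predicts argmaxₖ πₖ fₖ(x). The sketch below (NumPy assumed; the densities and priors are hypothetical, not the quiz's values) shows how the priors can move the argmax:

```python
import numpy as np

# Hypothetical class-conditioned densities f_k(x) at the query point
# (the quiz's actual table is not reproduced here).
f = np.array([0.15, 0.20, 0.05, 0.50, 0.01])

# Bayes rule: predicted class = argmax_k pi_k * f_k(x).
for priors in (np.full(5, 0.2),                            # uniform priors
               np.array([0.01, 0.03, 0.06, 0.30, 0.60])):  # 2*pi_i <= pi_{i+1}
    posterior = priors * f  # unnormalized posterior; argmax is unaffected
    print(priors, "-> predicted class:", np.argmax(posterior) + 1)
```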
7) Which of the following statement(s) about a two-class LDA classification model is/are true? (1 point)

On the decision boundary, the prior probabilities corresponding to both classes must be
equal.
On the decision boundary, the posterior probabilities corresponding to both classes must
be equal.
On the decision boundary, class-conditioned probability densities corresponding to both
classes must be equal.
On the decision boundary, the class-conditioned probability densities corresponding to
both classes may or may not be equal.
Yes, the answer is correct.
Score: 1
Accepted Answers:
On the decision boundary, the posterior probabilities corresponding to both classes must be
equal.
On the decision boundary, the class-conditioned probability densities corresponding to both
classes may or may not be equal.
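The distinction between the accepted and rejected statements can be made concrete in one dimension. On the boundary the posteriors are equal, which forces π₁f₁(x) = π₂f₂(x); if the priors differ, the class-conditioned densities must then differ. A small sketch with invented parameters (NumPy assumed):

```python
import numpy as np

def gauss_pdf(x, mu, s):
    return np.exp(-(x - mu)**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

# Two 1-D Gaussian classes with shared variance but unequal priors.
mu0, mu1, sigma = 0.0, 2.0, 1.0
pi0, pi1 = 0.75, 0.25

# LDA boundary: pi0 * f0(x) = pi1 * f1(x).  With equal variances this
# solves to x* = (mu0 + mu1)/2 + sigma^2 * log(pi0/pi1) / (mu1 - mu0).
x_star = (mu0 + mu1) / 2 + sigma**2 * np.log(pi0 / pi1) / (mu1 - mu0)

f0 = gauss_pdf(x_star, mu0, sigma)
f1 = gauss_pdf(x_star, mu1, sigma)
post0 = pi0 * f0 / (pi0 * f0 + pi1 * f1)
post1 = pi1 * f1 / (pi0 * f0 + pi1 * f1)
print(f"posteriors: {post0:.3f} vs {post1:.3f}")  # equal (0.5 each)
print(f"densities:  {f0:.3f} vs {f1:.3f}")        # unequal
```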
8) Consider the following two datasets and two LDA classifier models trained respectively on these datasets. (1 point)
Dataset A: 200 samples of class 0; 50 samples of class 1
Dataset B: 200 samples of class 0 (same as Dataset A); 100 samples of class 1 created by
repeating twice the class 1 samples from Dataset A
Let the classifier decision boundary learnt be of the form wᵀx + b = 0, where w is the slope and b is the intercept. Which of the given statements is true?
The learned decision boundary will be the same for both models.
The two models will have the same slope but different intercepts.
The two models will have different slopes but the same intercept.
The two models may have different slopes and different intercepts.
Yes, the answer is correct.
Score: 1
Accepted Answers:
The two models will have the same slope but different intercepts.
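Why only the intercept changes: with a shared covariance Σ, the LDA boundary is wᵀx + b = 0 with w = Σ⁻¹(μ₁ − μ₀), and b contains a log(π₁/π₀) term. Duplicating the class-1 samples leaves the class means and class covariances essentially unchanged and only alters the prior ratio, hence the intercept. A sketch with scikit-learn (synthetic data invented for illustration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X0 = rng.multivariate_normal([0, 0], np.eye(2), size=200)  # class 0
X1 = rng.multivariate_normal([2, 1], np.eye(2), size=50)   # class 1

# Dataset A: 200 vs 50.  Dataset B: class-1 samples repeated twice.
XA, yA = np.vstack([X0, X1]), np.array([0]*200 + [1]*50)
XB, yB = np.vstack([X0, X1, X1]), np.array([0]*200 + [1]*100)

lda_A = LinearDiscriminantAnalysis().fit(XA, yA)
lda_B = LinearDiscriminantAnalysis().fit(XB, yB)

print("slope A:", lda_A.coef_[0], "intercept A:", lda_A.intercept_)
print("slope B:", lda_B.coef_[0], "intercept B:", lda_B.intercept_)
# The slopes nearly coincide; the intercepts differ roughly by the
# change in log(pi1/pi0) from log(50/200) to log(100/200).
```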
9) Which of the following statement(s) about LDA is/are true? (1 point)
It minimizes the inter-class variance relative to the intra-class variance.
It maximizes the inter-class variance relative to the intra-class variance.
Maximizing the Fisher information results in the same direction of the separating
hyperplane as the one obtained by equating the posterior probabilities of classes.
Maximizing the Fisher information results in a different direction of the separating
hyperplane from the one obtained by equating the posterior probabilities of classes.
Yes, the answer is correct.
Score: 1
Accepted Answers:
It maximizes the inter-class variance relative to the intra-class variance.
Maximizing the Fisher information results in the same direction of the separating hyperplane
as the one obtained by equating the posterior probabilities of classes.
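A numerical sketch of the equivalence (NumPy assumed; the shared covariance and means are invented): the Fisher direction Σ_W⁻¹(μ₁ − μ₀), which maximizes between-class variance relative to within-class variance, is the same direction obtained by equating the two Gaussian posteriors under a shared covariance:

```python
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[2.0, 0.6], [0.6, 1.0]])  # shared covariance
X0 = rng.multivariate_normal([0, 0], cov, size=500)
X1 = rng.multivariate_normal([3, 1], cov, size=500)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter (pooled covariance, up to scaling).
Sw = np.cov(X0.T) + np.cov(X1.T)

# Fisher direction: maximizes inter-class over intra-class variance.
w_fisher = np.linalg.solve(Sw, mu1 - mu0)
w_fisher /= np.linalg.norm(w_fisher)
print(w_fisher)
# Equating the two Gaussian posteriors (shared covariance) gives a
# boundary normal proportional to Sigma^{-1}(mu1 - mu0) -- the same
# direction, which is the point of the accepted answer.
```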
10) Which of the following statement(s) regarding logistic regression and LDA is/are true for a binary classification problem? (1 point)
For any classification dataset, both algorithms learn the same decision boundary.
Adding a few outliers to the dataset is likely to cause a larger change in the decision
boundary of LDA compared to that of logistic regression.
Adding a few outliers to the dataset is likely to cause a similar change in the decision
boundaries of both classifiers.
If the intra-class distributions deviate significantly from the Gaussian distribution, logistic
regression is likely to perform better than LDA.
Yes, the answer is correct.
Score: 1
Accepted Answers:
Adding a few outliers to the dataset is likely to cause a larger change in the decision boundary
of LDA compared to that of logistic regression.
If the intra-class distributions deviate significantly from the Gaussian distribution, logistic
regression is likely to perform better than LDA.
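The outlier claim can be probed empirically. In the sketch below (scikit-learn and NumPy assumed; data invented for illustration), a few extreme but correctly classified class-1 points are added: they barely move logistic regression's boundary, since well-classified points contribute almost nothing to its gradient, but they distort LDA's mean and covariance estimates:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X0 = rng.multivariate_normal([0, 0], np.eye(2), size=200)
X1 = rng.multivariate_normal([3, 0], np.eye(2), size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# A few extreme but correctly classified class-1 outliers: far from the
# boundary, so logistic regression's likelihood barely sees them, while
# LDA's mean and covariance estimates are pulled towards them.
X_noisy = np.vstack([X, [[10, 8], [11, 9], [10.5, 8.5]]])
y_noisy = np.concatenate([y, [1, 1, 1]])

def boundary_normal(clf, X, y):
    w = clf.fit(X, y).coef_[0]
    return w / np.linalg.norm(w)

for name, make in [("LDA", LinearDiscriminantAnalysis),
                   ("LogReg", LogisticRegression)]:
    d0 = boundary_normal(make(), X, y)
    d1 = boundary_normal(make(), X_noisy, y_noisy)
    # Angle between clean and outlier-contaminated boundary normals;
    # LDA's rotation is typically the larger of the two.
    angle = np.degrees(np.arccos(np.clip(d0 @ d1, -1.0, 1.0)))
    print(f"{name}: boundary normal rotated by {angle:.1f} degrees")
```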