
Regression

• Regression is a statistical and machine learning technique used for modelling the
relationship between a dependent variable (target) and one or more independent
variables (predictors or features).
• The primary goal of regression analysis is to understand how changes in the
independent variables are associated with changes in the dependent variable.
• In regression, we typically assume that there is a linear or nonlinear relationship
between the independent variables and the dependent variable.
• The resulting model can be used for various purposes, including prediction,
forecasting, and understanding the underlying relationships in the data.
• In regression, the output is continuous.
Regression: Linear vs Non-Linear

[Figure: examples of a linear fit and a non-linear fit to data]
Types of Regression

• Linear Regression: Linear regression is used when there is a linear relationship between the independent and dependent variables.
• The simplest form is simple linear regression, which involves a single independent variable.
• Multiple linear regression involves multiple independent variables.
• Polynomial Regression: Polynomial regression is an extension of linear regression that models the relationship between the variables as an nth-degree polynomial, which makes it possible to capture more complex, non-linear relationships between the independent variable(s) and the dependent variable.
• In both cases, the goal is to find the best-fit line (for a single variable) or hyperplane (for multiple variables) that minimizes the sum of squared differences between the predicted and actual values, as the sketch below illustrates.
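To make the distinction concrete, here is a minimal Python sketch (not from the original slides) that fits both a straight line and a second-degree polynomial to the same curved data using numpy.polyfit; the data and the choice of degree are illustrative assumptions.

# Illustrative example: linear vs. polynomial regression on curved data.
import numpy as np

x = np.linspace(0, 5, 20)
rng = np.random.default_rng(0)
y = 0.5 * x**2 - x + 2 + rng.normal(0, 0.3, x.size)  # quadratic trend plus noise

linear = np.polyfit(x, y, deg=1)  # simple linear regression: y ≈ m*x + b
poly = np.polyfit(x, y, deg=2)    # polynomial regression: y ≈ a*x^2 + b*x + c

# The degree-2 fit captures the curvature that the straight line must miss.
print("linear coefficients:", linear)
print("polynomial coefficients:", poly)  # close to [0.5, -1.0, 2.0]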
Simple Linear Regression

• Simple linear regression models the dependent variable y as a linear function of a single independent variable x: hθ(x) = θ0 + θ1x, where θ0 is the intercept and θ1 is the slope.
• The cost (loss) function measures the error of the fit as the sum of squared differences between predicted and actual values: L(θ0, θ1) = (1 / 2n) * Σ(i=1 to n) [(yi - (θ0 + θ1xi))^2]

• Now, our objective is to minimise the value of the cost function.
• The unknown parameters in the equation of the cost function are θ0 and θ1.
• So, we need to find the values of θ0 and θ1 for which the value of the cost function is minimum.
Simple Linear Regression

• The Ordinary Least Squares (OLS) method gives a closed-form solution for the values of the unknown parameters; a code sketch follows.
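As a minimal sketch (not part of the original slides), the closed-form OLS estimates for the model y = θ0 + θ1x can be computed directly; the sample data below is an illustrative assumption.

import numpy as np

def ols_fit(x, y):
    # Closed-form least-squares estimates of the intercept (theta0) and slope (theta1):
    # theta1 = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2), theta0 = y_bar - theta1*x_bar
    x_bar, y_bar = np.mean(x), np.mean(y)
    theta1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    theta0 = y_bar - theta1 * x_bar  # the fitted line passes through (x_bar, y_bar)
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative data, roughly y = 2x + 1
y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])
print(ols_fit(x, y))  # approximately (1.05, 1.99)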
Simple Linear Regression

• We need to check how the loss function changes with small changes in the values of the parameters θ0 and θ1.
Simple Linear Regression
• The gradient descent algorithm is an iterative alternative for finding the values of the unknown parameters.
Simple Linear Regression
[Figure: the loss L(θ1) plotted against θ1, with starting points A and B on either side of the minimum at point C]

• We choose random values of θ0 and θ1 and then calculate the value of the loss function.
• For ease of understanding, we assume that θ0 = 0 and calculate the value of the loss function as a function of θ1 alone.
• Now, if we draw a graph between L(θ1) and θ1, the value of L(θ1) can start at point A or at point B, depending on the initially chosen value of θ1. We also assume that L(θ1) is minimum at point C.
• The question then arises: how do we reach point C from A or from B?
Simple Linear Regression
[Figure: the same loss curve L(θ1) versus θ1, showing the slope of the curve at points A and B]

• We differentiate the loss function L with respect to the parameter θ1, i.e. we calculate dL/dθ1.
• If the derivative comes out negative (downward slope), then the value of θ1 will increase.
• If it comes out positive (upward slope), then the value of θ1 will decrease.
• The rate at which the value of θ1 increases or decreases depends upon the learning rate α.
Simple Linear Regression
• The learning rate α is a hyperparameter: its value is not updated during the training of the model.
• Usually, the value of α is small (e.g. 0.01); a sketch of the resulting update rule follows.
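Below is a minimal Python sketch (not from the slides) of this one-parameter update rule, assuming θ0 = 0 and the squared-error loss; the data and the choice α = 0.01 are illustrative.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # illustrative data with true slope 2
y = np.array([2.0, 4.0, 6.0, 8.0])

alpha = 0.01   # learning rate (hyperparameter, fixed during training)
theta1 = 0.0   # initial guess

for _ in range(1000):
    grad = np.mean((theta1 * x - y) * x)  # dL/dtheta1 for L = (1/2n) * sum((theta1*x - y)^2)
    theta1 -= alpha * grad                # negative slope -> theta1 increases; positive -> decreases
print(theta1)  # approaches 2.0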
Simple Linear Regression
• If we consider both parameters, θ0 and θ1, then the loss surface can be visualised as a contour graph.

[Figure: contour plot of the loss function over the (θ0, θ1) plane]
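A minimal Python sketch (not from the slides) that draws such a contour graph, assuming the squared-error loss evaluated over a grid of (θ0, θ1) values; the data is illustrative.

import numpy as np
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 3.0, 4.0])  # illustrative data generated from y = 1 + 2x
y = np.array([3.0, 5.0, 7.0, 9.0])

t0, t1 = np.meshgrid(np.linspace(-2, 4, 100), np.linspace(0, 4, 100))
# Loss L(theta0, theta1) = (1/2n) * sum((theta0 + theta1*x - y)^2), evaluated over the grid
loss = sum((t0 + t1 * xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * len(x))

plt.contour(t0, t1, loss, levels=30)
plt.xlabel("theta0"); plt.ylabel("theta1")
plt.title("Contours of the loss function")
plt.show()  # the minimum sits at (theta0, theta1) = (1, 2)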
Gradient Descent for Simple Linear Regression

• In simple linear regression, the goal is to find the best-fitting line (a straight line)
that minimizes the sum of the squared differences between the predicted values
and the actual values of a dependent variable (Y) based on an independent variable
(X).
• Gradient descent is one of the optimization techniques used to find the parameters
(slope and intercept) of this line.
Gradient Descent for Simple Linear Regression

• Here are the mathematical steps for finding the parameters (slope and intercept) in
simple linear regression using gradient descent:
1. Initialize Parameters: Start by initializing the slope (m) and intercept (b) with some initial
values. These values can be set randomly or to some initial guess.
2. Define the Cost Function: The cost function represents how far off your predictions are from the actual values. In simple linear regression, the cost function (the Mean Squared Error, written here with a convenience factor of 1/2 so that the gradients come out cleaner) is defined as (and sketched in code below):
J(m, b) = (1 / 2n) * Σ(i=1 to n) [(Yi - (m * Xi + b))^2]
Where:
• J(m, b) is the cost function.
• m is the slope (parameter to be updated).
• b is the intercept (parameter to be updated).
• n is the number of data points.
• Xi is the independent variable (feature) for the ith data point.
• Yi is the actual dependent variable (target) for the ith data point.
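As a minimal sketch (not from the original text), this cost function can be written directly in Python; the sample data is an illustrative assumption.

import numpy as np

def cost(m, b, X, Y):
    # J(m, b) = (1 / 2n) * sum((Yi - (m*Xi + b))^2)
    n = len(X)
    return np.sum((Y - (m * X + b)) ** 2) / (2 * n)

X = np.array([1.0, 2.0, 3.0])
Y = np.array([3.0, 5.0, 7.0])  # exactly Y = 2X + 1
print(cost(2.0, 1.0, X, Y))    # 0.0 at the true parameters
print(cost(0.0, 0.0, X, Y))    # much larger for a poor guess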
Gradient Descent for Simple Linear Regression

3. Gradient Descent Iteration:
• Calculate the gradient (partial derivatives) of the cost function with respect to the parameters m and b:
∂J/∂m = (1/n) * Σ(i=1 to n) [(Yi - (m * Xi + b)) * (-Xi)]
∂J/∂b = (1/n) * Σ(i=1 to n) [(Yi - (m * Xi + b)) * (-1)]
• Update the parameters m and b simultaneously, using the gradients and a learning rate (α):
m = m - α * ∂J/∂m
b = b - α * ∂J/∂b
• Repeat the above two steps (calculating gradients and updating parameters) for a specified number of iterations or until convergence (when the change in cost becomes very small).
4. Convergence Check: Monitor the cost function's value during iterations. If it is decreasing and
converging to a minimum, the algorithm is working correctly. If not, you may need to adjust the
learning rate or other hyperparameters.
5. Final Parameters: Once the algorithm converges or reaches a stopping criterion, the final values
of m and b represent the best-fit line for your simple linear regression model.
6. Use the Model: You can now use the learned parameters (slope and intercept) to make predictions for new data points by plugging them into the linear equation: Y = m * X + b. A complete code sketch of these steps follows.
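Putting steps 1 to 6 together, here is a minimal Python sketch (not from the original text) of gradient descent for simple linear regression; the data, learning rate, iteration count, and tolerance are illustrative assumptions.

import numpy as np

def gradient_descent(X, Y, alpha=0.01, n_iters=20000, tol=1e-10):
    m, b = 0.0, 0.0                          # step 1: initialize parameters
    prev_cost = np.inf
    for _ in range(n_iters):
        residual = Y - (m * X + b)
        cost = np.mean(residual ** 2) / 2    # step 2: cost J(m, b) = (1/2n) * sum(residual^2)
        if abs(prev_cost - cost) < tol:      # step 4: convergence check
            break
        prev_cost = cost
        grad_m = np.mean(residual * -X)      # step 3: dJ/dm
        grad_b = np.mean(residual * -1)      # step 3: dJ/db
        m -= alpha * grad_m                  # simultaneous update of m and b
        b -= alpha * grad_b
    return m, b

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative data, roughly Y = 2X + 1
Y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])
m, b = gradient_descent(X, Y)
print(m, b)         # step 5: final parameters, close to (2, 1)
print(m * 6.0 + b)  # step 6: prediction for a new point X = 6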
