ARTIFICIAL INTELLIGENCE
AND
MACHINE LEARNING
[R22A6684]
LABORATORY MANUAL
B. TECH CSE
(III YEAR – II SEM)
R22 REGULATION
(2024-25)
Name:
Roll No: Branch:
Section:
Year: Sem:
Mission
To achieve and impart holistic technical education using the best of infrastructure and outstanding technical and teaching expertise, to develop students into competent and confident engineers.
Evolving the center of excellence through creative and innovative teaching-learning practices for promoting academic achievement, to produce internationally accepted, competitive, and world-class professionals.
PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)
PEO1 – ANALYTICAL SKILLS
To facilitate the graduates with the ability to visualize, gather information, articulate, analyze, solve complex problems, and make decisions. These are essential to address the challenges of complex and computation-intensive problems, increasing their productivity.
PEO2 – TECHNICAL SKILLS
To facilitate the graduates with the technical skills that prepare them for immediate employment and the pursuit of certification, providing a deeper understanding of the technology in advanced areas of computer science and related fields, thus encouraging the pursuit of higher education and research based on their interest.
PEO3 – SOFT SKILLS
To facilitate the graduates with the soft skills that include fulfilling the mission, setting goals, showing self-confidence by communicating effectively, having a positive attitude, getting involved in team work, being a leader, and managing their career and their life.
PEO4 – PROFESSIONAL ETHICS
To facilitate the graduates with the knowledge of professional and ethical responsibilities by paying attention to grooming, being conservative with style, following dress codes and safety codes, and adapting to technological advancements.
PROGRAM SPECIFIC OUTCOMES (PSOs)
1. Fundamentals and critical knowledge of the Computer System: Able to understand the working principles of the computer system and its components; apply the knowledge to build, assess, and analyze its software and hardware aspects.
3. Applications of Computing Domain & Research: Able to use the professional, managerial, and interdisciplinary skill set and domain-specific tools in development processes, identify research gaps, and provide innovative solutions to them.
PROGRAM OUTCOMES (POs)
Engineering Graduates should possess the following:
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities
with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
ARTIFICIAL INTELLIGENCE AND
MACHINE LEARNING LABORATORY
LAB OBJECTIVES:
1. To provide students with an academic environment that makes them aware of various AI algorithms.
2. To train students in Python programming to comprehend, analyze, design, and create AI platforms and solutions for real-life problems.
3. To learn the usage of libraries for Machine Learning in Python.
4. To demonstrate dimensionality reduction methods.
5. To describe appropriate supervised/unsupervised learning algorithms for a given problem.
LAB OUTCOMES:
Upon completion of the course, students will be able to
1. Apply various AI search algorithms (uninformed, informed, heuristic, constraint satisfaction)
2. Understand the fundamentals of knowledge representation and inference.
3. Illustrate the applications of Python Machine Learning Libraries.
4. Apply Dimensionality reduction methods for Machine Learning Tasks.
5. Design and analyze various supervised/unsupervised learning mechanisms.
About lab:
Python is a general-purpose, high-level programming language; other high-level languages you might have heard of include C++, PHP, and Java. Virtually all modern programming languages make use of an Integrated Development Environment (IDE), which allows the creation, editing, testing, and saving of programs and modules. In Python, the IDE is called IDLE (like many items in the language, this is a reference to the British comedy group Monty Python, and in this case, one of its members, Eric Idle).
Many modern languages use both processes: they are first compiled into a lower-level language, called byte code, and then interpreted by a program called a virtual machine. Python uses both processes, but because of the way programmers interact with it, it is usually considered an interpreted language. Practical aspects are the key to understanding and conceptual visualization of the theoretical aspects covered in the laboratory.
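A quick way to see this two-step process is the standard-library dis module, which prints the byte code that Python compiles a function into (a minimal sketch; the add function here is just an example):

import dis

def add(a, b):
    return a + b

# Show the byte code instructions the function was compiled into;
# the Python virtual machine interprets these at run time.
dis.dis(add)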
General laboratory instructions
1. Students are advised to come to the laboratory at least 5 minutes before the starting time; those who come more than 5 minutes late will not be allowed into the lab.
2. Plan your task properly well before the commencement, and come prepared to the lab with the synopsis/program/experiment details.
3. Students should enter the laboratory with:
a. Laboratory observation notes with all the details (problem statement, aim, algorithm, procedure, program, expected output, etc.) filled in for the lab session.
b. Laboratory record updated up to the last session's experiments, and any other materials needed in the lab.
c. Proper dress code and identity card.
4. Sign in the laboratory login register, write the TIME-IN, and occupy the computer system allotted to you by the faculty.
5. Execute your task in the laboratory, and record the results/ output in the lab observation
notebook, and get certified by the concerned faculty.
6. All students should be polite and cooperative with the laboratory staff, and must maintain discipline and decency in the laboratory.
7. Computer labs are established with sophisticated and high end branded systems, which
should be utilized properly.
8. Students/faculty must keep their mobile phones in SWITCHED OFF mode during the lab sessions. Misuse of the equipment, or misbehaviour with the staff or systems, will attract severe punishment.
9. Students must take the permission of the faculty in case of any urgency to go out; anybody found loitering outside the lab/class without permission during working hours will be treated seriously and punished appropriately.
10. Students should LOG OFF/SHUT DOWN the computer system before leaving the lab after completing the task (experiment) in all aspects. They must ensure the system/seat is left properly.
INDEX
WEEK-1
a) Write a program to implement Breadth First Search (BFS).
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []   # List to keep track of visited nodes
queue = []     # Queue for BFS

def bfs(visited, graph, node):   # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)         # dequeue the next node
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')   # function call
Output:
b) Write a program to implement Depth First Search (DFS).
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()   # Set to keep track of visited nodes of the graph

def dfs(visited, graph, node):   # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
dfs(visited, graph, '5')
Output:
Viva Questions:
1. What are the differences between informed and uninformed search?
2. What are the properties of search algorithms?
3. What is Breadth-First Search?
Faculty Signature
WEEK-2
Write a program to implement the Hill Climbing algorithm (Travelling Salesman Problem).
Program:
import random

def randomSolution(tsp):
    cities = list(range(len(tsp)))
    solution = []
    for i in range(len(tsp)):
        randomCity = cities[random.randint(0, len(cities) - 1)]
        solution.append(randomCity)
        cities.remove(randomCity)
    return solution

def routeLength(tsp, solution):
    routeLength = 0
    for i in range(len(solution)):
        routeLength += tsp[solution[i - 1]][solution[i]]
    return routeLength

def getNeighbours(solution):
    # Generate all tours obtained by swapping two cities
    neighbours = []
    for i in range(len(solution)):
        for j in range(i + 1, len(solution)):
            neighbour = solution.copy()
            neighbour[i] = solution[j]
            neighbour[j] = solution[i]
            neighbours.append(neighbour)
    return neighbours

def getBestNeighbour(tsp, neighbours):
    bestRouteLength = routeLength(tsp, neighbours[0])
    bestNeighbour = neighbours[0]
    for neighbour in neighbours:
        currentRouteLength = routeLength(tsp, neighbour)
        if currentRouteLength < bestRouteLength:
            bestRouteLength = currentRouteLength
            bestNeighbour = neighbour
    return bestNeighbour, bestRouteLength

def hillClimbing(tsp):
    currentSolution = randomSolution(tsp)
    currentRouteLength = routeLength(tsp, currentSolution)
    neighbours = getNeighbours(currentSolution)
    bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)
    # Keep moving to the best neighbour while it improves the route
    while bestNeighbourRouteLength < currentRouteLength:
        currentSolution = bestNeighbour
        currentRouteLength = bestNeighbourRouteLength
        neighbours = getNeighbours(currentSolution)
        bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)
    return currentSolution, currentRouteLength

def main():
    tsp = [
        [0, 400, 500, 300],
        [400, 0, 300, 500],
        [500, 300, 0, 400],
        [300, 500, 400, 0]
    ]
    print(hillClimbing(tsp))

if __name__ == "__main__":
    main()
Viva Questions:
1. What is hill climbing in artificial intelligence?
2. What are the different types of hill-climbing algorithms?
3. What is the problem of local maxima in hill climbing?
Faculty Signature
WEEK-3
Write a program to implement the A* search algorithm.
Program:
class Graph:
    def __init__(self, adjac_lis):
        self.adjac_lis = adjac_lis

    def get_neighbors(self, v):
        return self.adjac_lis[v]

    # This is the heuristic function, which has equal values for all nodes
    def h(self, n):
        H = {
            'A': 1,
            'B': 1,
            'C': 1,
            'D': 1
        }
        return H[n]

    def a_star_algorithm(self, start, stop):
        open_lst = set([start])   # visited nodes whose neighbours are not all inspected
        closed_lst = set()        # visited nodes whose neighbours are all inspected
        dist = {start: 0}         # g(n): current distance from the start node
        par = {start: start}      # parent map used to reconstruct the path

        while len(open_lst) > 0:
            # pick the node with the lowest f(n) = g(n) + h(n)
            n = None
            for v in open_lst:
                if n is None or dist[v] + self.h(v) < dist[n] + self.h(n):
                    n = v
            if n is None:
                print('Path does not exist!')
                return None
            # if the stop node is reached, reconstruct the path
            if n == stop:
                reconst_path = []
                while par[n] != n:
                    reconst_path.append(n)
                    n = par[n]
                reconst_path.append(start)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path
            # inspect all neighbours of n
            for (m, weight) in self.get_neighbors(n):
                if m not in open_lst and m not in closed_lst:
                    open_lst.add(m)
                    par[m] = n
                    dist[m] = dist[n] + weight
                elif dist[m] > dist[n] + weight:
                    dist[m] = dist[n] + weight
                    par[m] = n
                    if m in closed_lst:
                        closed_lst.remove(m)
                        open_lst.add(m)
            open_lst.remove(n)
            closed_lst.add(n)
        print('Path does not exist!')
        return None

adjac_lis = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjac_lis)
graph1.a_star_algorithm('A', 'D')
Output:
Viva Questions:
1. What is Best First Search?
2. Explain Heuristic Function?
3. Explain A* Search.
WEEK 4
Write a program to implement Tic-Tac-Toe game
Aim:
Write a program to implement Tic-Tac-Toe game.
Program:
import os
import time

board = [' '] * 10   # index 1..9 used for the 3x3 board
player = 1           # 1 -> 'X', 2 -> 'O'

########## Win Flags ##########
Win = 1
Draw = -1
Running = 0
Stop = 1
###############################
Game = Running

def DrawBoard():
    print(" %c | %c | %c " % (board[1], board[2], board[3]))
    print("___|___|___")
    print(" %c | %c | %c " % (board[4], board[5], board[6]))
    print("___|___|___")
    print(" %c | %c | %c \n" % (board[7], board[8], board[9]))

def CheckWin():
    global Game
    lines = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
             (1, 4, 7), (2, 5, 8), (3, 6, 9),
             (1, 5, 9), (3, 5, 7)]
    for a, b, c in lines:
        if board[a] == board[b] == board[c] != ' ':
            Game = Win
            return
    Game = Draw if all(board[i] != ' ' for i in range(1, 10)) else Running

while Game == Running:
    os.system('cls' if os.name == 'nt' else 'clear')
    DrawBoard()
    Mark = 'X' if player % 2 != 0 else 'O'
    choice = int(input("Player %d, enter a position [1-9]: " % (1 if Mark == 'X' else 2)))
    if board[choice] == ' ':
        board[choice] = Mark
        player += 1
        CheckWin()
        time.sleep(1)

os.system('cls' if os.name == 'nt' else 'clear')
DrawBoard()
if Game == Draw:
    print("Game Draw")
elif Game == Win:
    player -= 1
    if player % 2 != 0:
        print("Player1Won")
    else:
        print("Player2Won")
Output:
WEEK 5
Write a program to implement Water Jug Problem
Aim:
Write a program to implement the Water Jug Problem.
Program:
from collections import defaultdict

# jug1 and jug2 contain the maximum capacities of the respective jugs,
# and aim is the amount of water to be measured.
jug1, jug2, aim = 4, 3, 2

# Initialize dictionary with default value as False.
visited = defaultdict(lambda: False)

def waterJugSolver(amt1, amt2):
    # Check whether we have reached the goal state.
    if (amt1 == aim and amt2 == 0) or (amt2 == aim and amt1 == 0):
        print(amt1, amt2)
        return True
    # Explore the state only if it has not been visited before.
    if not visited[(amt1, amt2)]:
        print(amt1, amt2)
        visited[(amt1, amt2)] = True
        # Try all moves: empty a jug, fill a jug, or pour one jug into the other.
        return (waterJugSolver(0, amt2) or
                waterJugSolver(amt1, 0) or
                waterJugSolver(jug1, amt2) or
                waterJugSolver(amt1, jug2) or
                waterJugSolver(amt1 + min(amt2, (jug1 - amt1)),
                               amt2 - min(amt2, (jug1 - amt1))) or
                waterJugSolver(amt1 - min(amt1, (jug2 - amt2)),
                               amt2 + min(amt1, (jug2 - amt2))))
    else:
        return False

print("Steps:")
waterJugSolver(0, 0)
Output:
Faculty Signature
Week 6
Write a Python program to import and export data using the pandas library
Getting the dataset: go to www.kaggle.com, select Datasets, and find the Titanic dataset; then download the train.csv file and save it to the desktop.
1. Read a CSV file
import pandas as pd
url = 'C:/Users/MRCET1/Desktop/train.csv'
dataframe = pd.read_csv(url)
dataframe.head(5)
2. Write an Excel file
import pandas as pd
marks_data = pd.DataFrame({'ID': {0: 23, 1: 43, 2: 12, 3: 13, 4: 67, 5: 89},
                           'NAME': {0: 'Ram', 1: 'Deep', 2: 'Yash', 3: 'Arjun', 4: 'Aditya', 5: 'Divya'},
                           'Marks': {0: 89, 1: 92, 2: 45, 3: 78, 4: 56, 5: 76},
                           'Grade': {0: 'b', 1: 'a', 2: 'f', 3: 'c', 4: 'e', 5: 'c'}})
filename = 'C:/Users/MRCET1/Desktop/Marksdata.xlsx'
marks_data.to_excel(filename)
print('Data frame written to Excel')
3. Write a CSV file
import pandas as pd
marks_data = pd.DataFrame({'ID': {0: 23, 1: 43, 2: 12, 3: 13, 4: 67, 5: 89},
                           'NAME': {0: 'Ram', 1: 'Deep', 2: 'Yash', 3: 'Arjun', 4: 'Aditya', 5: 'Divya'},
                           'Marks': {0: 89, 1: 92, 2: 45, 3: 78, 4: 56, 5: 76},
                           'Grade': {0: 'b', 1: 'a', 2: 'f', 3: 'c', 4: 'e', 5: 'c'}})
filename = 'C:/Users/MRCET1/Desktop/Marksdata.csv'
marks_data.to_csv(filename)
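To verify the export, the CSV can be read back; a minimal check, assuming the Marksdata.csv file written above (index_col=0 skips the index column that to_csv writes by default):

import pandas as pd

# Read the exported file back and confirm the round trip
check = pd.read_csv('C:/Users/MRCET1/Desktop/Marksdata.csv', index_col=0)
print(check.head())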
Output:
Faculty Signature
df.describe()    # display count, mean, std and other details of the data
df.columns       # display the column names
column_names = df.columns
for column in column_names:
    print(column + ' - ' + str(df[column].isnull().sum()))
df.Survived.value_counts()
DATA VISUALIZATION
df[['Pclass', 'Survived']].groupby('Pclass').mean().Survived.plot.bar(x='Pclass', y='Survival Probability', rot=0)
# From the results, we can say that 1st class has a higher chance of surviving than the other two classes.
# Preprocess 'Name'
# Extract the title from the name of the passenger and categorize them.
# Drop the column 'Name'.
df['Title'] = df.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
df = df.drop(columns='Name')
df.Title.value_counts().plot.bar(x='Title', y=0, rot=0)
# Combine some of the classes and group all the rare classes into 'Others'.
df['Title'] = df['Title'].replace(['Dr', 'Rev', 'Col', 'Major', 'Countess', 'Sir', 'Jonkheer', 'Lady', 'Capt', 'Don'], 'Others')
df['Title'] = df['Title'].replace('Ms', 'Miss')
df['Title'] = df['Title'].replace('Mme', 'Mrs')
df['Title'] = df['Title'].replace('Mlle', 'Miss')
df.Title.value_counts().sort_index().plot.bar(x='Title', y='Passenger count')
df[['Title', 'Survived']].groupby('Title').mean().Survived.plot.bar(x='Title', y='Survival Probability')
df['Embarked'].isnull().sum()
# There are two null values in the column 'Embarked'. Let's impute them using the majority class.
# The majority class is 'S'. Impute the unknown values (NaN) using 'S'.
df['Embarked'] = df['Embarked'].fillna('S')
df.head()
# Missing values - 'Age'
# Let's find the columns that are useful to predict the value of 'Age'.
corr_matrix = df[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare']].corr()
plt.figure(figsize=(7, 6))
sns.heatmap(data=corr_matrix, cmap='BrBG', annot=True, linewidths=0.2)
# 'Age' is not correlated with 'Sex' and 'Fare', so we don't consider these two columns while imputing 'Age'.
# 'Pclass', 'SibSp' and 'Parch' are negatively correlated with 'Age'.
# Let's fill 'Age' with the median age of similar rows from 'Pclass', 'SibSp' and 'Parch'.
# If there are no similar rows, fill the age with the median age of the total dataset.
NaN_indexes = df['Age'][df['Age'].isnull()].index
for i in NaN_indexes:
    pred_age = df['Age'][((df.SibSp == df.iloc[i]["SibSp"]) &
                          (df.Parch == df.iloc[i]["Parch"]) &
                          (df.Pclass == df.iloc[i]["Pclass"]))].median()
    if not np.isnan(pred_age):
        df['Age'].iloc[i] = pred_age
    else:
        df['Age'].iloc[i] = df['Age'].median()
df.isnull().sum()
Output:
Viva Questions
1. What is Machine Learning?
2. What is the main key difference between supervised and unsupervised machine learning?
3. What are the different types of Machine Learning?
4. What is NumPy in Python?
Faculty Signature
WEEK-7
a) Implement dimensionality reduction using the Principal Component Analysis (PCA) method.
First, import the required libraries; then read the data and check how it looks.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv("C:/Users/MRCET1/Desktop/stars.csv")
data.head(5)
Now, let us check the shape of the dataset. Before proceeding with the problem
statement, understanding the data set is very important.
data.shape
a = pd.DataFrame(data['Spectral Class'].value_counts())
plt.figure(figsize=(8, 6))
sns.barplot(data=a, x='Spectral Class', y=a.index, palette='rainbow')
plt.title("Star Spectral Class Analysis")
Star Type Analysis
a = pd.DataFrame(data['Star type'].value_counts())
plt.pie(data=a, x='Star type', labels=a.index, autopct='%1.1f%%')
plt.title("Percentage Distribution of Star Type")
Correlation Analysis
matrix = data.corr()
mask = np.zeros_like(matrix, dtype=float)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(11, 6))
sns.heatmap(matrix, annot=True, cmap='viridis', annot_kws={'size': 10}, mask=mask)
plt.title("Correlation Analysis")
plt.show()
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
data['Color_Label'] = label_encoder.fit_transform(data['Star color'])
data['Spectral_Class_Label'] = label_encoder.fit_transform(data['Spectral Class'])
data.head()
print("Original Colours:")
print(data['Star color'].unique())
print("Labels:")
print(data['Color_Label'].unique())
y = data["Spectral_Class_Label"].values
X = data.drop(labels=['Spectral_Class_Label', 'Star color', 'Spectral Class'], axis=1).values
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(X)
explained_variance = pca.explained_variance_ratio_
explained_variance
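A common way to choose the number of components is the cumulative explained variance; a short sketch, assuming the fitted pca object above:

import numpy as np

# Cumulative share of total variance retained by the first k components
cumulative = np.cumsum(pca.explained_variance_ratio_)
print(cumulative)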
First, we train the classifier model by taking the top two features obtained with PCA. The built-in Random Forest classifier from scikit-learn is used.
from sklearn.decomposition import PCA
pca2 = PCA(n_components=2)
X_2 = pca2.fit_transform(X)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_2, y, test_size=0.2, random_state=5)

from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)

# Predicting the Test set results
y_pred = classifier.predict(X_test)
from sklearn.metrics import accuracy_score
print('Accuracy: ', accuracy_score(y_test, y_pred))
Output:
Viva Questions
1. What is PCA?
2. What is Dimensionality Reduction?
3. What is a CSV file?
4. Compare PCA and LDA
Faculty Signature
Viva Questions
1. What is meant by data visualization in machine learning?
2. What are the various types of Data Visualization Approaches?
3. List some of the Data Visualization Libraries Available in Python
4. List some of Feature Selection Techniques in supervised learning.
Faculty Signature
import numpy as np

def estimate_coef(x, y):
    n = np.size(x)                        # number of observations
    m_x, m_y = np.mean(x), np.mean(y)     # means of x and y
    SS_xy = np.sum(y * x) - n * m_y * m_x # cross-deviation about the means
    SS_xx = np.sum(x * x) - n * m_x * m_x # squared deviation about x
    b_1 = SS_xy / SS_xx                   # regression slope
    b_0 = m_y - b_1 * m_x                 # regression intercept
    return (b_0, b_1)

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {}\nb_1 = {}".format(b[0], b[1]))

main()
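To visualize the fit, the regression line can be plotted over the observations; a short matplotlib sketch, assuming estimate_coef from above (colors and markers are illustrative):

import numpy as np
import matplotlib.pyplot as plt

x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
b = estimate_coef(x, y)                    # coefficients from the function above

plt.scatter(x, y, color='m', marker='o')   # actual observations
y_pred = b[0] + b[1] * x                   # predicted response vector
plt.plot(x, y_pred, color='g')             # fitted regression line
plt.xlabel('x')
plt.ylabel('y')
plt.show()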
Output:
Faculty Signature
import numpy as np
from sklearn.linear_model import LogisticRegression

x = np.arange(10).reshape(-1, 1)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
print(x)
print(y)

model = LogisticRegression(solver='liblinear', random_state=0)
model.fit(x, y)
# The fitted estimator is displayed as:
# LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
#                    intercept_scaling=1, l1_ratio=None, max_iter=100,
#                    multi_class='warn', n_jobs=None, penalty='l2', random_state=0,
#                    solver='liblinear', tol=0.0001, verbose=0, warm_start=False)

model = LogisticRegression(solver='liblinear', random_state=0).fit(x, y)
print(model.classes_)
print(model.intercept_)
print(model.coef_)
print(model.predict_proba(x))
print(model.predict(x))
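To see how well the model separates the two classes, a confusion matrix can be computed; a minimal sketch, assuming the model, x and y from above:

from sklearn.metrics import confusion_matrix

# Rows are actual classes, columns are predicted classes
print(confusion_matrix(y, model.predict(x)))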
Viva Questions
1. What is meant by linear regression?
2. List some of the supervised learning algorithms.
3. List the types of linear regression and define each.
Viva Questions
1. What is meant by logistic regression?
2. List the types of logistic regression and explain each.
3. List the classification models in machine learning.
4. What are the steps involved in logistic regression?
Faculty Signature
# Importing the required packages
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Function importing the dataset
def importdata():
    balance_data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-' +
                               'databases/balance-scale/balance-scale.data',
                               sep=',', header=None)
    # Printing the dataset shape
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)
    return balance_data

# Function to split the dataset
def splitdataset(balance_data):
    # Separating the target variable
    X = balance_data.values[:, 1:5]
    Y = balance_data.values[:, 0]
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=100)
    return X, Y, X_train, X_test, y_train, y_test

# Function to perform training with the Gini index
def train_using_gini(X_train, X_test, y_train):
    clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100,
                                      max_depth=3, min_samples_leaf=5)
    clf_gini.fit(X_train, y_train)
    return clf_gini

# Function to perform training with entropy
def train_using_entropy(X_train, X_test, y_train):
    clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100,
                                         max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy

# Function to make predictions
def prediction(X_test, clf_object):
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred

# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
    print("Confusion Matrix: ", confusion_matrix(y_test, y_pred))
    print("Accuracy: ", accuracy_score(y_test, y_pred) * 100)

def main():
    # Building Phase
    data = importdata()
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)
    clf_gini = train_using_gini(X_train, X_test, y_train)
    clf_entropy = train_using_entropy(X_train, X_test, y_train)
    # Operational Phase
    print("Results Using Gini Index:")
    # Prediction using gini
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)
    print("Results Using Entropy:")
    # Prediction using entropy
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)

# Calling main function
if __name__ == "__main__":
    main()
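The learned rules can also be inspected as text with sklearn's export_text; a minimal sketch, assuming a trained clf_gini is available (the balance-scale features are left-weight, left-distance, right-weight, right-distance):

from sklearn.tree import export_text

# Print the decision rules of the trained gini-based tree
print(export_text(clf_gini, feature_names=['LW', 'LD', 'RW', 'RD']))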
Viva Questions
1. What is Decision Tree (ID3) in machine learning?
2. What is meant by information gain?
3. What are the major issues in decision tree learning?
4. How does a decision tree help in decision making?
Faculty Signature
WEEK-10
a) Implementation of the Naïve Bayes classifier algorithm
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('titanic.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, -1].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Training the Naive Bayes model on the Training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)

# Predicting the Test set results
y_pred = classifier.predict(X_test)

# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
ac = accuracy_score(y_test, y_pred)
cm = confusion_matrix(y_test, y_pred)
print(ac)
print(cm)
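GaussianNB computes the posterior probability P(class | features) via Bayes' theorem; the per-class posteriors can be inspected directly. A minimal sketch, assuming the classifier and X_test from above:

# Posterior probabilities for the first five test samples:
# one row per sample, one column per class
print(classifier.predict_proba(X_test[:5]))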
# b) Evaluating training/test scores for different values of k
# (the missing training step is reconstructed here with a k-Nearest
# Neighbours classifier, which the k-dependent scores suggest)
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Separating the dependent and independent variables
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

K = range(1, 26)
training, test, scores = [], [], {}
for k in K:
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    training_score = clf.score(X_train, y_train)
    test_score = clf.score(X_test, y_test)
    training.append(training_score)
    test.append(test_score)
    scores[k] = [training_score, test_score]

ax = sns.stripplot(training)
ax.set(xlabel='values of k', ylabel='Training Score')
plt.show()
ax = sns.stripplot(test)
ax.set(xlabel='values of k', ylabel='Test Score')
plt.show()
plt.scatter(K, training, color='k')
plt.scatter(K, test, color='g')
plt.show()
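The best value of k can then be read off from the stored scores; a minimal sketch, assuming the scores dictionary built above:

# Pick the k whose test score (second entry) is highest
best_k = max(scores, key=lambda k: scores[k][1])
print("Best k:", best_k, "scores:", scores[best_k])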
Viva Questions
1. What is 'naive' in the Naive Bayes classifier?
2. Write a note on SVM.
3. Give the formula for Bayes' Theorem.
4. List some of the advantages and disadvantages of the Naïve Bayes classifier.
Faculty Signature
WEEK-11: Build an Artificial Neural Network model with backpropagation on a given dataset
Let's first understand the term neural network. In a neural network, neurons are fed inputs; each neuron computes the weighted sum over its inputs, passes it through an activation function, and passes the output on to the next neuron.
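As a tiny illustration of that weighted-sum-plus-activation step, a single neuron can be computed directly (the input, weight and bias values here are arbitrary example numbers):

import numpy as np

inputs = np.array([2.0, 9.0])      # example inputs to the neuron
weights = np.array([0.4, 0.6])     # example connection weights
bias = 0.1                         # example bias term

weighted_sum = np.dot(inputs, weights) + bias   # 2*0.4 + 9*0.6 + 0.1 = 6.3
activation = 1 / (1 + np.exp(-weighted_sum))    # sigmoid activation
print(activation)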
import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)   # maximum of X array longitudinally
y = y / 100

# Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of Sigmoid Function
def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 5   # Setting training iterations
lr = 0.1    # Setting learning rate
inputlayer_neurons = 2    # number of features in the data set
hiddenlayer_neurons = 3   # number of hidden layer neurons
output_neurons = 1        # number of neurons at the output layer

# Weight and bias initialization
# draws a random range of numbers uniformly of dim x*y
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward Propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    # how much the hidden layer weights contributed to the error
    hiddengrad = derivatives_sigmoid(hlayer_act)
    d_hiddenlayer = EH * hiddengrad
    # dot product of next-layer error and current-layer output
    wout += hlayer_act.T.dot(d_output) * lr
    wh += X.T.dot(d_hiddenlayer) * lr

    print("-----------Epoch-", i + 1, "Starts ---------- ")
    print("Input: \n" + str(X))
    print("Actual Output: \n" + str(y))
    print("Predicted Output: \n", output)
    print("-----------Epoch-", i + 1, "Ends ---------- \n")

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
Viva Questions
1. Define ANN.
2. What are the types of Artificial Neural Networks?
3. List some of the applications of Artificial Neural Networks.
4. What is backpropagation in neural networks?
WEEK-12: Implementing K-means Clustering Algorithm
# installations
# 1. pip install scikit-learn
# 2. pip install matplotlib
# 3. pip install k-means-constrained
# 4. pip install pandas
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
from k_means_constrained import KMeansConstrained
import pandas as pd

df = pd.read_csv('student_clustering.csv')
X = df.iloc[:, :].values
km = KMeansConstrained(n_clusters=4, max_iter=500)
y_means = km.fit_predict(X)
plt.scatter(X[y_means == 0, 0], X[y_means == 0, 1], color='red')
plt.scatter(X[y_means == 1, 0], X[y_means == 1, 1], color='blue')
plt.scatter(X[y_means == 2, 0], X[y_means == 2, 1], color='green')
plt.scatter(X[y_means == 3, 0], X[y_means == 3, 1], color='yellow')
plt.show()
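When no cluster-size constraints are needed, the same clustering can be run with scikit-learn's plain KMeans; a minimal sketch, assuming the X array loaded above:

from sklearn.cluster import KMeans

# Plain (unconstrained) k-means with the same number of clusters
km_plain = KMeans(n_clusters=4, max_iter=500, n_init=10, random_state=0)
labels = km_plain.fit_predict(X)
print(labels[:10])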
Viva Questions
1. What is KNN in machine learning?
2. Why do we need a K-NN algorithm?
3. What is the Kernel Method?
4. List some of the major kernel functions in Support Vector Machines.
Faculty Signature
Exercise Programs