AI_XI_SEC3
Foundation for AI
Syllabus for Section 3: Foundation for AI [08 Marks] - ARTI

History of AI: Alan Turing and cracking Enigma, Mark 1 machines, 1956 - the birth of the term AI, the AI winter of the 70s, expert systems of the 1980s (skipped), the journey of present-day AI, pattern recognition and machine learning.

Introduction to Linear Algebra and Statistics for AI:
Basic matrix operations like matrix addition, subtraction, multiplication, transpose of a matrix, identity matrix. (Refer to any standard Maths textbook of Class XI, XII.)
Brief introduction to vectors, unit vector, normal vector, Euclidean space. (Refer to any standard Maths textbook of Class XI, XII.)
Correlation, regression, introduction to graphs (basic idea).
Probability distribution, frequency, mean, median and mode, variance and standard deviation, Gaussian distribution. (Refer to any standard Maths textbook of Class XI, XII.)
Distance function, Euclidean norm, distance between two points in 2D and 3D, and extension of the idea to n dimensions.
CONTENTS

1. AI - Basic Concept
2. History of AI
3. Turing Test
4. Distance Function, Euclidean Norm, Distance between Two Points in 2D, 3D and n Dimensions
5. Graphs
6. Correlation and Regression
What Is Artificial Intelligence?
Artificial Intelligence is currently one of the hottest buzzwords in tech and with good
reason. The last few years have seen several innovations and advancements that
have previously been solely in the realm of science fiction slowly transform into
reality.
Experts regard artificial intelligence as a factor of production, which has the
potential to introduce new sources of growth and change the way work is done
across industries. China and the United States are primed to benefit the most from
the coming AI boom, accounting for nearly 70% of the global impact.
Artificial Intelligence is a method of making a computer, a computer-controlled
robot, or a piece of software think intelligently, like the human mind. AI is
accomplished by studying the patterns of the human brain and by analyzing the
cognitive process. These studies result in intelligent software and systems.
Weak AI (Narrow AI)
Weak AI, also known as narrow AI, refers to AI systems designed to perform specific
tasks. Such a system operates within predefined boundaries and cannot generalize
beyond its specialized domain.
Strong AI (General AI)
Strong AI, also known as general AI, refers to AI systems that possess human-level
intelligence or even surpass human intelligence across a wide range of tasks.
Strong AI would be capable of understanding, reasoning, learning, and applying
knowledge to solve complex problems in a manner similar to human cognition.
However, the development of strong AI is still largely theoretical and has not been
achieved to date.
Based on functionality, AI systems can also be classified into four types:
1. Purely Reactive
These machines do not have any memory or data to work with, specializing in just
one field of work. For example, in a chess game, the machine observes the moves
and makes the best possible decision to win.
2. Limited Memory
These machines collect previous data and continue adding it to their memory. They
have enough memory or experience to make proper decisions, but memory is
minimal. For example, this machine can suggest a restaurant based on the location
data that has been gathered.
3. Theory of Mind
This kind of AI can understand thoughts and emotions, as well as interact socially.
However, a machine based on this type is yet to be built.
4. Self-Aware
Self-aware machines are the future generation of these new technologies. They
will be intelligent, sentient, and conscious.
Machine Learning:
Machine Learning focuses on the development of algorithms and models that
enable computers to learn from data and make predictions or decisions without
explicit programming.
Deep Learning:
Deep Learning is a subset of machine learning that focuses on training artificial
neural networks inspired by the human brain's structure and functioning.
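To make the idea of "learning from data without explicit programming" concrete, here is a minimal Python sketch of one of the simplest machine learning methods, a nearest-neighbour classifier. It is an illustration only, and the toy data points and labels are invented.

import math

def euclidean(p, q):
    # Euclidean distance between two points of equal dimension
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def predict(training_data, new_point):
    # Label a new point with the label of its nearest stored example:
    # the "model" is nothing more than the remembered data itself.
    nearest = min(training_data, key=lambda ex: euclidean(ex[0], new_point))
    return nearest[1]

# Toy training set: (height in cm, weight in kg) -> label
training_data = [
    ((150, 50), "small"),
    ((160, 60), "small"),
    ((180, 85), "large"),
    ((190, 95), "large"),
]

print(predict(training_data, (155, 55)))  # prints "small"
print(predict(training_data, (185, 90)))  # prints "large"

No rule for deciding "small" or "large" was ever written; the behaviour comes entirely from the stored examples, which is the essence of learning from data.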
How Does Artificial Intelligence Work?
Put simply, AI systems work by merging large amounts of data with intelligent,
iterative processing algorithms. This combination allows AI to learn from patterns
and features in the
analyzed data. Each time an Artificial Intelligence system performs a round of data
processing, it tests and measures its performance and uses the results to develop
additional expertise.
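As an illustration of this test-measure-refine loop, here is a small Python sketch (an invented example, not a description of any particular AI system) that repeatedly measures its error on some data and uses that measurement to improve a single parameter w in the model y = w * x:

data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # observations, roughly y = 2x

w = 0.0             # initial guess for the parameter
learning_rate = 0.05

for step in range(100):
    # Measure performance: average gradient of the squared error
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Use the measurement to develop "additional expertise": adjust w
    w -= learning_rate * gradient

print(round(w, 3))  # about 2.036, learned from the data

Each pass through the loop is one "round of data processing": the system evaluates how wrong it currently is and nudges itself toward a better answer.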
Ways of Implementing AI
Machine Learning
Artificial Intelligence emphasizes three cognitive skills of learning, reasoning, and
self-correction, skills that the human brain possesses to one degree or another. We
define these in the context of AI as:
Learning: The acquisition of information and the rules needed to use that
information.
Reasoning: Using the rules to reach definite or approximate
conclusions.
Self-Correction: The process of continually fine-tuning AI algorithms and
ensuring that they offer the most accurate results they can.
However, researchers and programmers have extended and elaborated the goals
of AI to the following:
1. Logical Reasoning
2. Knowledge Representation
3. Planning and Navigation
4. Natural Language Processing
5. Perception
Use computers to interact with the world through sight, hearing, touch,
and smell.
6. Emergent Intelligence
Intelligence that is not explicitly programmed, but emerges from the rest
of the specific AI features. The vision for this goal is to have machines
exhibit emotional intelligence and moral reasoning.
Some of the tasks performed by AI-enabled devices include:
Speech recognition
Object detection
Solving problems and learning from the given data
Planning an approach for future tests to be done
Pros
Cons
It’s costly to implement
It can’t duplicate human creativity
It will definitely replace some jobs, leading to unemployment
People can become overly reliant on it
Image and Video Analysis
AI techniques, including computer vision, enable the analysis and interpretation of
images and videos. This finds application in facial recognition, object detection and
tracking, content moderation, medical imaging, and autonomous vehicles.
Recommendation Systems
AI-powered recommendation systems are used in e-commerce, streaming
platforms, and social media to personalize user experiences. They analyze user
preferences, behavior, and historical data to suggest relevant products, movies,
music, or content.
Financial Services
AI is extensively used in the finance industry for fraud detection, algorithmic trading,
credit scoring, and risk assessment. Machine learning models can analyze vast
amounts of financial data to identify patterns and make predictions.
Healthcare
AI applications in healthcare include disease diagnosis, medical imaging analysis,
drug discovery, personalized medicine, and patient monitoring. AI can assist in
identifying patterns in medical data and provide insights for better diagnosis and
treatment.
Gaming
AI algorithms are employed in gaming for creating realistic virtual characters,
opponent behavior, and intelligent decision-making. AI is also used to optimize
game graphics, physics simulations, and game testing.
Smart Homes and IoT
AI enables the development of smart home systems that can automate tasks,
control devices, and learn from user preferences. AI can enhance the functionality
and efficiency of Internet of Things (IoT) devices and networks.
Cybersecurity
AI helps in detecting and preventing cyber threats by analyzing network traffic,
identifying anomalies, and predicting potential attacks. It can enhance the security
of systems and data through advanced threat detection and response
mechanisms.
These are just a few examples of how AI is applied in various fields. The potential of
AI is vast, and its applications continue to expand as technology advances.
Google Maps
Google Maps utilizes AI algorithms to provide real-time navigation, traffic updates,
and personalized recommendations. It analyzes vast amounts of data, including
historical traffic patterns and user input, to suggest the fastest routes, estimate
arrival times, and even predict traffic congestion.
Smart Assistants
Smart assistants like Amazon's Alexa, Apple's Siri, and Google Assistant employ AI
technologies to interpret voice commands, answer questions, and perform tasks.
These assistants use natural language processing and machine learning
algorithms to understand user intent, retrieve relevant information, and carry out
requested actions.
Snapchat Filters
Self-Driving Cars
Wearables
MuZero
MuZero is an AI algorithm developed by DeepMind that combines reinforcement
learning and deep neural networks. It has achieved remarkable success in playing
complex board games like chess, Go, and shogi at a superhuman level. MuZero
learns and improves its strategies through self-play and planning.
These examples demonstrate the wide-ranging applications of AI, showcasing its
potential to enhance our lives, improve efficiency, and drive innovation across
various industries.
FAQs
1. Where is AI used?
Artificial intelligence is frequently utilized to present individuals with personalized
suggestions based on their prior searches and purchases and other online
behavior. AI is also crucial in commerce, in areas such as product optimization,
inventory planning, and logistics. Machine learning, cybersecurity, customer
relationship management, internet searches, and personal assistants are some of
the most common applications of AI. Voice assistants, picture recognition for face
unlocking in cell phones, and ML-based financial fraud detection are all examples
of AI software that is now in use.
2. What is AI?
Artificial intelligence is the simulation of human intelligence in machines that are
programmed to think like humans and mimic their actions.
3. Artificial Superintelligence (ASI): Hypothetical AI surpassing human
intelligence in all aspects, potentially capable of solving complex
problems and making advancements beyond human comprehension.
8. Is AI dangerous?
Aside from planning for a future with super-intelligent computers, artificial
intelligence in its current state might already pose problems.
History of AI
Artificial intelligence, or at least the modern concept of it, has been with us for
several decades, but only in the recent past has AI captured the collective psyche
of everyday business and society.
The introduction of AI in the 1950s very much paralleled the beginnings of the
Atomic Age. Though their evolutionary paths have differed, both technologies are
viewed as posing an existential threat to humanity.
Perceptions about the darker side of AI aside, artificial intelligence tools and
technologies, since the advent of the Turing test in 1950, have made incredible
strides, despite the intermittent roller-coaster rides due mainly to fits and
starts in funding for AI research. Many of these breakthrough advancements have flown under
the radar, visible mostly to academic, government and scientific research circles
until the past decade or so, when AI was practically applied to the wants and needs
of the masses. AI products such as Apple's Siri and Amazon's Alexa, online shopping,
social media feeds and self-driving cars have forever altered the lifestyles of
consumers and operations of businesses.
Through the decades, some of the more notable developments include the
following:
Eliza, the chatbot with cognitive capabilities, and Shakey, the first mobile
intelligent robot, in the 1960s.
1950
Alan Turing published "Computing Machinery and Intelligence," introducing the
imitation game, later known as the Turing test, as a criterion for machine intelligence.
1951
Marvin Minsky and Dean Edmonds developed the first artificial neural network
(ANN) called SNARC using 3,000 vacuum tubes to simulate a network of 40 neurons.
1952
Arthur Samuel developed a checkers-playing program for the IBM 701, one of the
first programs that improved itself through experience.
1956
John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon coined
the term artificial intelligence in a proposal for a workshop widely recognized as a
founding event in the AI field.
1958
Frank Rosenblatt developed the perceptron, an early ANN that could learn from
data and became the foundation for modern neural networks.
John McCarthy developed the programming language Lisp, which was quickly
adopted by the AI industry and gained enormous popularity among developers.
1959
Arthur Samuel coined the term machine learning in a seminal paper explaining
that a computer could be programmed to learn and eventually outplay its programmer.
1964
Daniel Bobrow developed STUDENT, an early natural language processing program
that could solve algebra word problems.
1965
Edward Feigenbaum, Joshua Lederberg and colleagues at Stanford began work on
Dendral, the first expert system, which identified organic compounds from mass
spectrometry data.
1966
Joseph Weizenbaum created Eliza, one of the first chatbots, which simulated a
conversation with a psychotherapist.
Stanford Research Institute developed Shakey, the world's first mobile intelligent
robot that combined AI, computer vision, navigation and NLP. It's the grandfather of
self-driving cars and drones.
1968
Terry Winograd created SHRDLU, the first multimodal AI that could manipulate and
reason out a world of blocks according to instructions from a user.
1969
Marvin Minsky and Seymour Papert published the book Perceptrons, which
described the limitations of simple neural networks and caused neural network
research to decline and symbolic AI research to thrive.
1973
James Lighthill released the report "Artificial Intelligence: A General Survey," which
caused the British government to significantly reduce support for AI research.
1980
Digital Equipment Corporation began using R1 (also known as XCON), the first
successful commercial expert system, to configure orders for new computer systems.
1981
Danny Hillis designed parallel computers for AI and other computational tasks, an
architecture similar to modern GPUs.
1984
Marvin Minsky and Roger Schank coined the term AI winter at a meeting of the
Association for the Advancement of Artificial Intelligence, warning the business
community that AI hype would lead to disappointment and the collapse of the
industry, which happened three years later.
1985
Judea Pearl introduced Bayesian networks, a formalism for reasoning under
uncertainty that became central to probabilistic AI.
1988
Judea Pearl published "Probabilistic Reasoning in Intelligent Systems," a landmark
treatment of probabilistic methods in AI.
1989
Yann LeCun, Yoshua Bengio and Patrick Haffner demonstrated how convolutional
neural networks (CNNs) can be used to recognize handwritten characters, showing
that neural networks could be applied to real-world problems.
1997
Sepp Hochreiter and Jürgen Schmidhuber proposed the Long Short-Term
Memory recurrent neural network, which could process entire sequences of data
such as speech or video.
IBM's Deep Blue defeated Garry Kasparov in a historic chess rematch, the first
defeat of a reigning world chess champion by a computer under tournament
conditions.
2000
Cynthia Breazeal at MIT demonstrated Kismet, a robot that could recognize and
simulate emotions.
2006
IBM Watson originated with the initial goal of beating a human on the iconic quiz
show Jeopardy! In 2011, the question-answering computer system defeated the
show's all-time (human) champion, Ken Jennings.
2009
Google began secretly developing a self-driving car.
2011
Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier and Jonathan Masci
developed the first CNN to achieve "superhuman" performance by winning the
German Traffic Sign Recognition competition.
2012
Geoffrey Hinton, Ilya Sutskever and Alex Krizhevsky introduced a deep CNN
architecture that won the ImageNet challenge and triggered the explosion of deep
learning research and implementation.
2013
DeepMind introduced deep reinforcement learning, training a neural network to
play Atari video games from raw screen pixels.
2014
Facebook developed the deep learning facial recognition system DeepFace, which
identifies human faces in digital images with near-human accuracy.
2016
DeepMind's AlphaGo defeated top Go player Lee Sedol in Seoul, South Korea,
drawing comparisons to the Kasparov chess match with Deep Blue nearly 20 years
earlier.
Uber started a self-driving car pilot program in Pittsburgh for a select group of
users.
2017
Google researchers developed the concept of transformers in the seminal paper
"Attention Is All You Need," inspiring subsequent research into tools that could
automatically parse unlabeled text into large language models (LLMs).
British physicist Stephen Hawking warned, "Unless we learn how to prepare for, and
avoid, the potential risks, AI could be the worst event in the history of our civilization."
2018
Developed by IBM, Airbus and the German Aerospace Center DLR, Cimon was the
first robot sent into space to assist astronauts.
OpenAI released GPT (Generative Pre-trained Transformer), paving the way for
subsequent LLMs.
Groove X unveiled a home mini-robot called Lovot that could sense and affect
mood changes in humans.
2019
OpenAI released GPT-2, a large language model capable of generating convincingly
human-like text.
2020
The University of Oxford developed an AI test called Curial to rapidly identify COVID-
19 in emergency room patients.
Nvidia announced the beta version of its Omniverse platform for creating 3D models
of the physical world.
2021
OpenAI introduced the Dall-E multimodal AI system that can generate images
from text prompts.
The University of California, San Diego, created a four-legged soft robot that
functioned on pressurized air instead of electronics.
2022
Google software engineer Blake Lemoine was fired after claiming that the LaMDA
chatbot was sentient and disclosing confidential information about it.
Intel claimed its FakeCatcher real-time deepfake detector was 96% accurate.
2023
OpenAI announced the GPT-4 multimodal LLM that receives both text and image
prompts.
Beyond 2023
In business, 55% of organizations that have deployed AI always consider AI for every
new use case they're evaluating, according to a 2023 Gartner survey. By 2026,
Gartner reported, organizations that "operationalize AI transparency, trust and
security will see their AI models achieve a 50% improvement in terms of adoption,
business goals and user acceptance."
Meanwhile, concerns about AI's transparency, accountability, ethics and humanity
have emerged and will continue to clash and seek levels of acceptability among
business and society.
Turing Test in Artificial Intelligence
The Turing Test is a widely used measure of a machine's ability to demonstrate
human-like intelligence. It was proposed by the British mathematician and
computer scientist Alan Turing in 1950 to address the question of whether a
computer (machine) can think intelligently like a human.
The basic idea of the Turing Test is simple: a human judge engages in a text-based
conversation with both a human and a machine, and then decides which of the
two they believe to be a human. If the judge is unable to distinguish between the
human and the machine based on the conversation, then the machine is said to
have passed the Turing Test.
The Turing Test is widely used as a benchmark for evaluating the progress of
artificial intelligence research, and has inspired numerous studies and experiments
aimed at developing machines that can pass the test.
While the Turing Test has been used as a measure of machine intelligence for over
six decades, it is not without its critics. Some argue that the test is too focused on
language and does not take into account other important aspects of intelligence,
such as perception, problem-solving, and decision-making.
Despite its limitations, the Turing Test remains an important reference point in the
field of artificial intelligence and continues to inspire new research and
development in this area.
Imagine a game of three players: two humans and one computer. An interrogator
(a human) is isolated from the other two players. The interrogator's job is to try to
figure out which one is human and which one is a computer by asking questions of
both of them. To make things harder, the computer tries to make the interrogator
guess wrongly. In other words, the computer tries to be as indistinguishable from a
human as possible.
In the "standard interpretation" of the Turing Test, player C, the interrogator, is
given the task of trying to determine which player, A or B, is a computer and
which is a human. The interrogator is limited to using the responses to written
questions to make the determination.
The conversation between interrogator and computer would be like this:
C (Interrogator): Are you a computer?
A (Computer): No.
C: Multiply one large number by another, 158745887 * 56755647.
A: (After a long pause, an incorrect answer!)
C: Add 5478012 and 4563145.
A: (Pauses for about 20 seconds, then gives an answer) 10041157
If the interrogator is unable to distinguish the answers provided by the human
from those of the computer, then the computer passes the test and the
machine (computer) is considered as intelligent as a human. In other words, a
computer is considered intelligent if its conversation cannot be easily
distinguished from a human's. The whole conversation is limited to a text-only
channel such as a computer keyboard and screen.
Turing also predicted that by the year 2000 a computer "would be able to play the
imitation game so well that an average interrogator will not have more than a 70-
percent chance of making the right identification (machine or human) after five
minutes of questioning." No computer has come close to this standard.
But in the year 1980, the philosopher John Searle proposed the "Chinese room
argument". He argued that the Turing test could not be used to determine whether
or not a machine is as intelligent as a human. He argued that any machine
like ELIZA and PARRY could easily pass the Turing Test simply by manipulating
symbols of which they had no understanding. Without understanding, they could
not be described as “thinking” in the same sense people do.
Distance function:
A distance function provides the distance between the elements of a set. If the
distance is zero, the elements are equivalent; otherwise they are different from
each other.
A metric or distance function is a function d which takes pairs of points or
objects to real numbers and satisfies the following rules:
1. d(x, y) ≥ 0 (a distance is never negative);
2. d(x, y) = 0 if and only if x = y (distinct points are at a positive distance);
3. d(x, y) = d(y, x) (distance is symmetric);
4. d(x, z) ≤ d(x, y) + d(y, z) (the triangle inequality).
For example, for the vector x = (1, 2, 3):

L1-norm ǀxǀ₁ = ǀ1ǀ + ǀ2ǀ + ǀ3ǀ = 6 ≈ 6.000
L2-norm ǀxǀ₂ = √14 ≈ 3.742
L3-norm ǀxǀ₃ = 36^(1/3) ≈ 3.302
L4-norm ǀxǀ₄ = 98^(1/4) ≈ 3.146
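The table above can be checked with a few lines of Python (a quick illustrative sketch):

def lp_norm(x, p):
    # Lp norm: (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p)
    return sum(abs(v) ** p for v in x) ** (1 / p)

x = (1, 2, 3)
for p in (1, 2, 3, 4):
    print(f"L{p}-norm = {lp_norm(x, p):.3f}")

# Output:
# L1-norm = 6.000
# L2-norm = 3.742
# L3-norm = 3.302
# L4-norm = 3.146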
Euclidean Norm
The most commonly encountered vector norm (often simply called the norm of a
vector, or sometimes the magnitude of a vector) is the L2 norm, commonly known
as the Euclidean norm.
In n-dimensional Euclidean space, the intuitive notion of the length of the
vector x = (x₁, x₂, …, xₙ) is

ǀxǀ₂ = √(x₁² + x₂² + … + xₙ²)

This is the Euclidean norm, which gives the ordinary distance from the
origin to the point x, a consequence of the Pythagorean theorem.
Distance between two points in 2D
If the points X = (x₁, y₁) and Y = (x₂, y₂) are in 2-dimensional space, then the
Euclidean distance between them is

d(X, Y) = √((x₂ − x₁)² + (y₂ − y₁)²)

In 3-dimensional space, with X = (x₁, y₁, z₁) and Y = (x₂, y₂, z₂), the distance is

d(X, Y) = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)

The same idea extends to n dimensions: for points P = (p₁, …, pₙ) and Q = (q₁, …, qₙ),

d(P, Q) = √((q₁ − p₁)² + (q₂ − p₂)² + … + (qₙ − pₙ)²)
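Because the same formula works in any number of dimensions, a single function covers the 2D, 3D and n-dimensional cases. Here is a short Python sketch (the sample points are made up for illustration):

import math

def euclidean_distance(p, q):
    # Distance between two points of equal dimension
    if len(p) != len(q):
        raise ValueError("points must have the same dimension")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(euclidean_distance((0, 0), (3, 4)))         # 2D: 5.0
print(euclidean_distance((1, 2, 3), (4, 6, 15)))  # 3D: 13.0
print(euclidean_distance((0,) * 5, (1,) * 5))     # 5D: sqrt(5) = 2.236...

# The metric rules above hold; for example, symmetry:
assert euclidean_distance((1, 2), (4, 6)) == euclidean_distance((4, 6), (1, 2))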
What is a Graph?
A graph is a diagram that shows the relationship between two variables: the values
of one variable are marked along the x-axis and the corresponding values of the
other along the y-axis. A scatter plot, used below to describe correlation, plots one
point for each pair of observed values.
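As a small illustration (using the widely available matplotlib library; the data points are invented), a scatter plot of paired values can be drawn like this:

import matplotlib.pyplot as plt

x = [2, 4, 6, 8, 10]   # first variable
y = [3, 7, 5, 10, 12]  # second variable

plt.scatter(x, y)      # one point per (x, y) pair
plt.xlabel("x")
plt.ylabel("y")
plt.title("Scatter plot of y against x")
plt.show()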
Correlation in Statistics
Methods of correlation summarize the relationship between two variables
in a single number called the correlation coefficient. The correlation
coefficient is usually represented using the symbol r, and it ranges from -
1 to +1.
A correlation coefficient quite close to 0, but either positive or negative,
implies little or no relationship between the two variables. A correlation
coefficient close to plus 1 means a positive relationship between the two
variables, with increases in one of the variables being associated with
increases in the other variable.
A correlation coefficient close to -1 indicates a negative relationship
between two variables, with an increase in one of the variables being
associated with a decrease in the other variable.
For example, if there exists a correlation between two variables X and Y,
then when the value of one variable changes in one direction, the value of the
other variable changes either in the same direction (i.e. positive change) or
in the opposite direction (i.e. negative change). Furthermore, if the
correlation is linear, we can represent the relative movement of the two
variables by drawing a straight line on graph paper.
Correlation Coefficient
The correlation coefficient, r, is a summary measure that describes the
extent of the statistical relationship between two variables. The correlation
coefficient is scaled so that it is always between -1 and +1. When r is close
to 0 this means that there is little relationship between the variables and
the farther away from 0 r is, in either the positive or negative direction, the
greater the relationship between the two variables.
Types of Correlation
The scatter plot explains the correlation between the two attributes or
variables. It represents how closely the two variables are connected.
There can be three such situations to see the relation between the two
variables –
Positive Correlation – when the values of the two variables move in
the same direction so that an increase/decrease in the value of one
variable is followed by an increase/decrease in the value of the other
variable.
For example, when an economy slows down, businesses
experience falling consumer demand, resulting in downward
pressure on prices and inflation.
Negative Correlation – when the values of the two variables move
in the opposite direction so that an increase/decrease in the value
of one variable is followed by a decrease/increase in the value of the
other variable.
No Correlation – when there is no consistent relationship between
the two variables, so that a change in one variable is not associated
with any predictable change in the other.
Correlation Formula
Correlation shows the relation between two variables. Correlation
coefficient shows the measure of correlation. To compare two
datasets, we use the correlation formulas.
Pearson Correlation Coefficient Formula
The most common formula is the Pearson correlation coefficient,
used for linear dependency between the data sets. The value of
the coefficient lies between -1 and +1. When the coefficient is
zero, the data are considered unrelated, while a value of +1
indicates a perfect positive correlation and -1 a perfect negative
correlation.
The Pearson correlation coefficient is calculated as

r = (nΣxy − ΣxΣy) / √((nΣx² − (Σx)²)(nΣy² − (Σy)²))

Where n = Quantity of information (the number of data pairs)
Σx = Total of the first variable values
Σy = Total of the second variable values
Σxy = Sum of the products of the first and second values
Σx² = Sum of the squares of the first values
Σy² = Sum of the squares of the second values
Linear Correlation Coefficient Formula
The linear correlation coefficient is given by the same Pearson formula above.
When using the Pearson correlation coefficient formula, you'll need to consider
whether you're dealing with data from a sample or the whole population. The sample
and population formulas differ in their symbols and inputs. A sample correlation
coefficient is called r, while a population correlation coefficient is called rho, the Greek
letter ρ.
Sample Correlation Coefficient Formula
The formula is given by:

r = Σ((x − x̄)(y − ȳ)) / √(Σ(x − x̄)² · Σ(y − ȳ)²)

where x̄ and ȳ are the sample means of x and y.
Example 1. Find the correlation coefficient for the data x = 600, 800, 1000 and
y = 1200, 1000, 2000.

Solution:
To simplify the calculation, we divide both x and y by 100.

x     y     x − x̄   y − ȳ   (x − x̄)²   (y − ȳ)²   (x − x̄)(y − ȳ)
6     12    −2      −2      4          4          4
8     10    0       −4      0          16         0
10    20    2       6       4          36         12

Here x̄ = 8, ȳ = 14, Σ(x − x̄)² = 8, Σ(y − ȳ)² = 56 and Σ((x − x̄)(y − ȳ)) = 16.

Pearson correlation coefficient = 16 / √(8 × 56) = 16 / √448
r = 0.756
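The same calculation can be verified with a short Python sketch of the sample correlation formula (illustrative code, not part of the original example):

import math

def pearson_r(xs, ys):
    # r = Σ((x - x̄)(y - ȳ)) / √(Σ(x - x̄)² · Σ(y - ȳ)²)
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - x_bar) ** 2 for x in xs) *
                    sum((y - y_bar) ** 2 for y in ys))
    return num / den

x = [6, 8, 10]   # Example 1 data, already divided by 100
y = [12, 10, 20]
print(round(pearson_r(x, y), 3))  # 0.756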
Example 2. A survey was conducted in your city. Given is the following sample data
containing a person's age and their corresponding income. Find out whether the
increase in age has an effect on income using the correlation coefficient formula.

Age: 25, 30, 36, 43

Solution:
To simplify the calculation, we divide the income figures y by 1000. Then x̄ = 33.5
and ȳ = 49. For instance, for the pair x = 30, y = 44: x − x̄ = −3.5, y − ȳ = −5,
(x − x̄)² = 12.25, (y − ȳ)² = 25 and (x − x̄)(y − ȳ) = 17.5.
Computing the remaining rows in the same way and applying the sample formula gives
r = 0.9923
Since r is very close to +1, income increases strongly with age.
Example 3. Calculate the correlation coefficient for the following data:

x: 41, 42, 43, 44, 45
y: 3.2, 3.3, 3.4, 3.5, 3.6

Solution:
Here n = 5.

x values: ∑x = 215, ∑x² = 9255, x̄ = 43, ∑(x − x̄)² = 10
y values: ∑y = 17, ∑y² = 57.9, ȳ = 3.4, ∑(y − ȳ)² = 0.1
x and y combined: ∑xy = 732, ∑((x − x̄)(y − ȳ)) = 1

r calculation:
r = ∑((x − x̄)(y − ȳ)) / √(∑(x − x̄)² · ∑(y − ȳ)²) = 1 / √((10)(0.1)) = 1

Since r = 1, x and y are perfectly positively correlated: every increase in x is
matched by a proportional increase in y.
Regression Analysis
Regression analysis refers to assessing the relationship between
an outcome variable and one or more explanatory variables. The outcome
variable is known as the dependent variable and the explanatory variables
are known as independent variables. The dependent variable is denoted
by "y" and the independent variables by "x" in regression analysis.
Linear Regression
Linear regression is a linear approach to modelling the
relationship between a scalar response and one or more
independent variables. If the regression has one independent
variable, it is known as simple linear regression. If it has
more than one independent variable, it is known as multiple
linear regression.
· Regression helps economists and financial analysts in things
ranging from asset valuation to making predictions.
· In order for regression results to be properly interpreted, several
assumptions about the data and the model itself must hold.

For simple linear regression, the regression line is

y = a + bx, with b = (nΣxy − ΣxΣy) / (nΣx² − (Σx)²) and a = (Σy − bΣx) / n

Where,
x and y are two variables on the regression line.
b = Slope of the line.
a = y-intercept of the line.
x = Values of the first data set.
y = Values of the second data set.
Solved Examples
Question: Find the linear regression equation for the following two
sets of data:

x: 2, 4, 6, 8
y: 3, 7, 5, 10
Solution:

x     y     x²     xy
2     3     4      6
4     7     16     28
6     5     36     30
8     10    64     80

Σx = 20, Σy = 25, Σx² = 120, Σxy = 144, n = 4

b = (nΣxy − ΣxΣy) / (nΣx² − (Σx)²) = (4 × 144 − 20 × 25) / (4 × 120 − 20²) = 76 / 80
b = 0.95

a = (Σy − bΣx) / n = (25 − 0.95 × 20) / 4 = 6 / 4
a = 1.5

Linear regression is given by:
y = a + bx
y = 1.5 + 0.95 x
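The slope and intercept can be checked with a short Python sketch of the same least squares formulas (illustrative code, not part of the original solution):

def linear_regression(xs, ys):
    # b = (nΣxy - ΣxΣy) / (nΣx² - (Σx)²),  a = (Σy - bΣx) / n
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a = (sy - b * sx) / n
    return a, b

x = [2, 4, 6, 8]
y = [3, 7, 5, 10]
a, b = linear_regression(x, y)
print(a, b)        # 1.5 0.95
print(a + b * 5)   # predicted y when x = 5: 6.25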
Correlation and Regression Differences
There are some differences between correlation and regression.
Correlation quantifies the degree to which two variables are associated. It does
not fit a line through the data points. The correlation coefficient shows how much
one variable tends to change when the other one does.
Linear regression finds the best line that predicts y from x; correlation does
not fit a line.
Correlation is used when you measure both variables, while linear regression is
mostly applied when x is a variable that is manipulated.
Dependent and independent variables: in correlation there is no distinction
between the two variables, while in regression the two variables are different,
with y depending on x.