AI_XI_SEC3

The document provides a comprehensive overview of Artificial Intelligence (AI), covering its history, foundational concepts, types, and applications across various industries. It distinguishes between weak AI and strong AI, discusses the principles of machine learning and deep learning, and outlines the advantages and disadvantages of AI. Additionally, it highlights notable examples of AI in everyday life and addresses frequently asked questions about the technology.

Uploaded by

SOUMIK PAL

STUDY MATERIAL

Foundation for AI

 10 Marks in Artificial Intelligence [ARTI] -- Class XI

Syllabus for Section 3 : Foundation for AI [ 08 Marks ] -ARTI
 History of AI: Alan Turing and cracking Enigma, Mark 1 machines, 1956 - the birth of the term AI, the AI winter of the 70s, expert systems of the 1980s, the journey to present-day AI, pattern recognition and machine learning
 Introduction to Linear Algebra and statistics for AI:
 Basic matrix operations like matrix addition, subtraction, multiplication, transpose of a matrix, identity matrix --- refer to any standard Maths textbook of Class XI, XII
 Brief introduction to vectors, unit vector, normal vector, Euclidean space --- refer to any standard Maths textbook of Class XI, XII
 Correlation, Regression, Introduction to Graphs (basic idea)
 Probability distribution, frequency, mean, median and mode, variance and standard deviation, Gaussian distribution --- refer to any standard Maths textbook of Class XI, XII
 Distance function, Euclidean norm, distance between two points in 2D and 3D and extension of the idea to n dimensions
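As a preview of the last syllabus point, the Euclidean distance formula extends unchanged from 2D to n dimensions. A minimal Python sketch (the function name is illustrative, not prescribed by the syllabus):

```python
import math

def euclidean_distance(p, q):
    """Euclidean distance between two points with the same number of coordinates.

    Generalizes sqrt((x2 - x1)^2 + (y2 - y1)^2) from 2D to n dimensions.
    """
    if len(p) != len(q):
        raise ValueError("points must have the same number of coordinates")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# 2D: distance between (0, 0) and (3, 4) is 5.0 (the 3-4-5 triangle)
print(euclidean_distance((0, 0), (3, 4)))        # 5.0
# 3D and beyond use exactly the same formula with more coordinates
print(euclidean_distance((1, 2, 3), (1, 2, 3)))  # 0.0
```

The same function works for any n because the sum simply runs over however many coordinate pairs the points have.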

CONTENTS

1. AI - Basic Concept
2. History of AI
3. Turing Test
4. Distance Function, Euclidean Norm, Distance between Two Points in 2D, 3D and n Dimensions
5. Graphs
6. Correlation and Regression

What Is Artificial Intelligence?
Artificial Intelligence is currently one of the hottest buzzwords in tech and with good
reason. The last few years have seen several innovations and advancements that
have previously been solely in the realm of science fiction slowly transform into
reality.
Experts regard artificial intelligence as a factor of production, which has the
potential to introduce new sources of growth and change the way work is done
across industries. China and the United States are primed to benefit the most from
the coming AI boom, accounting for nearly 70% of the global impact.
Artificial Intelligence is a method of making a computer, a computer-controlled robot, or software think intelligently like the human mind. AI is accomplished by studying the patterns of the human brain and by analyzing the cognitive process. The outcome of these studies is the development of intelligent software and systems.

Weak AI vs. Strong AI


When discussing artificial intelligence (AI), it is common to distinguish between two
broad categories: weak AI and strong AI.
Weak AI (Narrow AI)
Weak AI refers to AI systems that are designed to perform specific tasks and are
limited to those tasks only. These AI systems excel at their designated functions but
lack general intelligence. Examples of weak AI include voice assistants like Siri or
Alexa, recommendation algorithms, and image recognition systems. Weak AI operates within predefined boundaries and cannot generalize beyond its specialized domain.
Strong AI (General AI)
Strong AI, also known as general AI, refers to AI systems that possess human-level
intelligence or even surpass human intelligence across a wide range of tasks.
Strong AI would be capable of understanding, reasoning, learning, and applying
knowledge to solve complex problems in a manner similar to human cognition.
However, the development of strong AI is still largely theoretical and has not been
achieved to date.

Types of Artificial Intelligence

Based on capability, AI is commonly grouped into three levels:

1. Narrow AI (or Weak AI): Specialized AI designed for specific tasks.

2. General AI (or Strong AI): Human-level intelligence, still theoretical.

3. Artificial Superintelligence (ASI): Hypothetical AI surpassing human intellect.

Based on functionality, AI systems are further classified into four types:

1. Purely Reactive
These machines do not have any memory or data to work with, specializing in just
one field of work. For example, in a chess game, the machine observes the moves
and makes the best possible decision to win.

2. Limited Memory
These machines collect previous data and continue adding it to their memory. They
have enough memory or experience to make proper decisions, but memory is
minimal. For example, this machine can suggest a restaurant based on the location
data that has been gathered.

3. Theory of Mind
This kind of AI can understand thoughts and emotions, as well as interact socially.
However, a machine based on this type is yet to be built.

4. Self-Aware
Self-aware machines are the future generation of these new technologies. They will be intelligent, sentient, and conscious.

Deep Learning vs. Machine Learning

Machine Learning:
Machine Learning focuses on the development of algorithms and models that
enable computers to learn from data and make predictions or decisions without
explicit programming. Here are key characteristics of machine learning:

1. Feature Engineering: In machine learning, experts manually engineer or select relevant features from the input data to aid the algorithm in making accurate predictions.
2. Supervised and Unsupervised Learning: Machine learning algorithms can
be categorized into supervised learning, where models learn from labeled
data with known outcomes, and unsupervised learning, where algorithms
discover patterns and structures in unlabeled data.
3. Broad Applicability: Machine learning techniques find application across
various domains, including image and speech recognition, natural
language processing, and recommendation systems.
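The difference between supervised and unsupervised learning described in point 2 can be sketched with a toy Python example (the data and the nearest-mean/midpoint rules below are invented purely for illustration):

```python
# Supervised learning: learn from labeled (value, label) pairs with known outcomes.
labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

# "Training": compute the mean value seen for each label.
means = {}
for value, label in labeled:
    means.setdefault(label, []).append(value)
means = {label: sum(vs) / len(vs) for label, vs in means.items()}

def predict(x):
    # Predict the label whose training mean is closest to x.
    return min(means, key=lambda label: abs(x - means[label]))

print(predict(1.4))  # "low"  -- closest to the mean of the "low" examples
print(predict(7.2))  # "high"

# Unsupervised learning: no labels, just discover structure. Here we simply
# split unlabeled values into two groups around the overall midpoint.
unlabeled = [1.1, 1.9, 8.4, 9.3]
midpoint = (min(unlabeled) + max(unlabeled)) / 2
groups = [[x for x in unlabeled if x <= midpoint],
          [x for x in unlabeled if x > midpoint]]
print(groups)  # [[1.1, 1.9], [8.4, 9.3]]
```

The supervised half needed the answers (labels) during training; the unsupervised half found the two clusters without ever being told they exist.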

Deep Learning:
Deep Learning is a subset of machine learning that focuses on training artificial
neural networks inspired by the human brain's structure and functioning. Here are
key characteristics of deep learning:

1. Automatic Feature Extraction: Deep learning algorithms have the ability to automatically extract relevant features from raw data, eliminating the need for explicit feature engineering.
2. Deep Neural Networks: Deep learning employs neural networks with
multiple layers of interconnected nodes (neurons), enabling the learning
of complex hierarchical representations of data.
3. High Performance: Deep learning has demonstrated exceptional
performance in domains such as computer vision, natural language
processing, and speech recognition, often surpassing traditional machine
learning approaches.
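The multi-layer idea in point 2 can be sketched by hand: each layer is a set of weighted sums passed through a nonlinearity, and later layers build on the outputs of earlier ones. The weights below are arbitrary, chosen only for illustration:

```python
def relu(x):
    # A common nonlinearity: pass positive values, clip negatives to zero.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One layer: each neuron computes a weighted sum of all inputs plus a bias,
    # then applies the nonlinearity.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two hidden layers of two neurons each: a (very small) "deep" network.
x = [1.0, 2.0]
h1 = layer(x,  [[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0])   # first representation
h2 = layer(h1, [[1.0, 0.5], [-0.4, 0.9]], [0.0, 0.2])   # built on top of h1
print(h1, h2)
```

Real deep networks work the same way, just with many more neurons and layers, and with the weights learned from data rather than written by hand.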

How Does Artificial Intelligence Work?
Put simply, AI systems work by merging large amounts of data with intelligent, iterative processing
algorithms. This combination allows AI to learn from patterns and features in the
analyzed data. Each time an Artificial Intelligence system performs a round of data
processing, it tests and measures its performance and uses the results to develop
additional expertise.
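This test-measure-improve loop can be made concrete with a toy Python sketch (not any specific AI system): on each round the program measures its error on the data, then nudges its single parameter in the direction that reduces that error.

```python
# Data generated by the "true" rule y = 3x; the system must discover the 3.
data = [(1, 3), (2, 6), (3, 9)]

w = 0.0       # initial guess for the coefficient
rate = 0.01   # how strongly each round's error adjusts the guess

for round_ in range(200):
    # Test and measure performance: mean squared error of current predictions.
    error = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # Use the result to improve: step opposite the error gradient.
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= rate * gradient

print(round(w, 2))  # converges toward 3.0
```

Each pass through the loop is one "round of data processing": the error measurement is the test, and the parameter update is the expertise gained from it.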

Ways of Implementing AI

Machine Learning

It is machine learning that gives AI the ability to learn. This is done by using algorithms to discover patterns and generate insights from the data they are exposed to.
Deep Learning
Deep learning, which is a subcategory of machine learning, provides AI with the
ability to mimic a human brain’s neural network. It can make sense of patterns,
noise, and sources of confusion in the data.

AI Programming Cognitive Skills: Learning, Reasoning and Self-Correction

Artificial Intelligence emphasizes three cognitive skills of learning, reasoning, and self-correction, skills that the human brain possesses to one degree or another. We define these in the context of AI as:

 Learning: The acquisition of information and the rules needed to use that
information.
 Reasoning: Using the rules to reach definite or approximate conclusions.
 Self-Correction: The process of continually fine-tuning AI algorithms and
ensuring that they offer the most accurate results they can.

However, researchers and programmers have extended and elaborated the goals
of AI to the following:

1. Logical Reasoning

AI programs enable computers to perform sophisticated tasks. On February 10, 1996, IBM's Deep Blue computer won a game of chess against the reigning world champion, Garry Kasparov.

2. Knowledge Representation

Smalltalk is an object-oriented, dynamically typed, reflective programming language that was created to underpin the "new world" of computing exemplified by "human-computer symbiosis."

3. Planning and Navigation

The process of enabling a computer to get from point A to point B. A prime example of this is Google's self-driving Toyota Prius.

4. Natural Language Processing

Setting up computers that can understand and process language.

5. Perception

Using computers to interact with the world through sight, hearing, touch, and smell.

6. Emergent Intelligence

Intelligence that is not explicitly programmed, but emerges from the rest
of the specific AI features. The vision for this goal is to have machines
exhibit emotional intelligence and moral reasoning.
Some of the tasks performed by AI-enabled devices include:

 Speech recognition
 Object detection
 Solving problems and learning from the given data
 Planning an approach for future tests

Advantages and Disadvantages of AI


Artificial intelligence has its pluses and minuses, much like any other concept or
innovation. Here’s a quick rundown of some pros and cons.

Pros

 It reduces human error
 It never sleeps, so it’s available 24x7
 It never gets bored, so it easily handles repetitive tasks
 It’s fast

Cons
 It’s costly to implement
 It can’t duplicate human creativity
 It will definitely replace some jobs, leading to unemployment
 People can become overly reliant on it

Applications of Artificial Intelligence


Artificial intelligence (AI) has a wide range of applications across various industries
and domains. Here are some notable applications of AI:

 Natural Language Processing (NLP)


AI is used in NLP to analyze and understand human language. It powers
applications such as speech recognition, machine translation, sentiment analysis,
and virtual assistants like Siri and Alexa.

 Image and Video Analysis

AI techniques, including computer vision, enable the analysis and interpretation of
images and videos. This finds application in facial recognition, object detection and
tracking, content moderation, medical imaging, and autonomous vehicles.

 Robotics and Automation


AI plays a crucial role in robotics and automation systems. Robots equipped with
AI algorithms can perform complex tasks in manufacturing, healthcare, logistics,
and exploration. They can adapt to changing environments, learn from experience,
and collaborate with humans.

 Recommendation Systems
AI-powered recommendation systems are used in e-commerce, streaming
platforms, and social media to personalize user experiences. They analyze user
preferences, behavior, and historical data to suggest relevant products, movies,
music, or content.

 Financial Services
AI is extensively used in the finance industry for fraud detection, algorithmic trading,
credit scoring, and risk assessment. Machine learning models can analyze vast
amounts of financial data to identify patterns and make predictions.

 Healthcare
AI applications in healthcare include disease diagnosis, medical imaging analysis,
drug discovery, personalized medicine, and patient monitoring. AI can assist in
identifying patterns in medical data and provide insights for better diagnosis and
treatment.

 Virtual Assistants and Chatbots


AI-powered virtual assistants and chatbots interact with users, understand their
queries, and provide relevant information or perform tasks. They are used in
customer support, information retrieval, and personalized assistance.

 Gaming
AI algorithms are employed in gaming for creating realistic virtual characters,
opponent behavior, and intelligent decision-making. AI is also used to optimize
game graphics, physics simulations, and game testing.

 Smart Homes and IoT
AI enables the development of smart home systems that can automate tasks,
control devices, and learn from user preferences. AI can enhance the functionality
and efficiency of Internet of Things (IoT) devices and networks.

 Cybersecurity
AI helps in detecting and preventing cyber threats by analyzing network traffic,
identifying anomalies, and predicting potential attacks. It can enhance the security
of systems and data through advanced threat detection and response
mechanisms.
These are just a few examples of how AI is applied in various fields. The potential of
AI is vast, and its applications continue to expand as technology advances.

Artificial Intelligence Examples


Artificial Intelligence (AI) has become an integral part of our daily lives,
revolutionizing various industries and enhancing user experiences. Here are some
notable examples of AI applications:

ChatGPT

ChatGPT is an advanced language model developed by OpenAI, capable of generating human-like responses and engaging in natural language conversations. It uses deep learning techniques to understand and generate coherent text, making it useful for customer support, chatbots, and virtual assistants.

Google Maps
Google Maps utilizes AI algorithms to provide real-time navigation, traffic updates,
and personalized recommendations. It analyzes vast amounts of data, including
historical traffic patterns and user input, to suggest the fastest routes, estimate
arrival times, and even predict traffic congestion.

Smart Assistants

Smart assistants like Amazon's Alexa, Apple's Siri, and Google Assistant employ AI
technologies to interpret voice commands, answer questions, and perform tasks.
These assistants use natural language processing and machine learning
algorithms to understand user intent, retrieve relevant information, and carry out
requested actions.

Snapchat Filters

Snapchat's augmented reality filters, or "Lenses," incorporate AI to recognize facial features, track movements, and overlay interactive effects on users' faces in real-time. AI algorithms enable Snapchat to apply various filters, masks, and animations that align with the user's facial expressions and movements.

Self-Driving Cars

Self-driving cars rely heavily on AI for perception, decision-making, and control. Using a combination of sensors, cameras, and machine learning algorithms, these vehicles can detect objects, interpret traffic signs, and navigate complex road conditions autonomously, enhancing safety and efficiency on the roads.

Wearables

Wearable devices, such as fitness trackers and smartwatches, utilize AI to monitor and analyze users' health data. They track activities, heart rate, sleep patterns, and more, providing personalized insights and recommendations to improve overall well-being.

MuZero
MuZero is an AI algorithm developed by DeepMind that combines reinforcement
learning and deep neural networks. It has achieved remarkable success in playing
complex board games like chess, Go, and shogi at a superhuman level. MuZero
learns and improves its strategies through self-play and planning.
These examples demonstrate the wide-ranging applications of AI, showcasing its
potential to enhance our lives, improve efficiency, and drive innovation across
various industries.

FAQs

1. Where is AI used?
Artificial intelligence is frequently utilized to present individuals with personalized
suggestions based on their prior searches and purchases and other online
behavior. AI is extremely crucial in commerce, such as product optimization,
inventory planning, and logistics. Machine learning, cybersecurity, customer
relationship management, internet searches, and personal assistants are some of
the most common applications of AI. Voice assistants, picture recognition for face
unlocking in cell phones, and ML-based financial fraud detection are all examples
of AI software that is now in use.

2. What is artificial intelligence in simple words?


Artificial Intelligence (AI) in simple words refers to the ability of machines or
computer systems to perform tasks that typically require human intelligence. It is
a field of study and technology that aims to create machines that can learn from
experience, adapt to new information, and carry out tasks without explicit
programming. Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.

3. What Are the 4 Types of AI?


The current categorization system sorts AI into four basic categories: reactive, limited memory, theory of mind, and self-aware.

4. How Is AI Used Today?


Machines today can learn from experience, adapt to new inputs, and even perform
human-like tasks with help from artificial intelligence (AI). Artificial intelligence
examples today, from chess-playing computers to self-driving cars, are heavily
based on deep learning and natural language processing. There are several
examples of AI software in use in daily life, including voice assistants, face
recognition for unlocking mobile phones and machine learning-based financial
fraud detection. AI software is typically obtained by downloading AI-capable
software from an internet marketplace, with no additional hardware required.

5. What are some examples of AI in everyday life?


Examples of AI in everyday life include voice assistants like Siri or Alexa,
recommendation engines like Netflix's movie recommendations, and autonomous
vehicles.

6. How is AI helping in our life?


AI and ML-powered software and gadgets mimic human brain processes to assist
society in advancing with the digital revolution. AI systems perceive their
environment, deal with what they observe, resolve difficulties, and take action to
help with duties to make daily living easier. People check their social media
accounts on a frequent basis, including Facebook, Twitter, Instagram, and other
sites. AI is not only customizing your feeds behind the scenes, but it is also
recognizing and deleting bogus news. So, AI is assisting you in your daily life.

7. What are the three types of AI?

The three types of AI are:

1. Artificial Narrow Intelligence (ANI): Also known as Weak AI, it specializes in performing specific tasks and lacks general cognitive abilities.
2. Artificial General Intelligence (AGI): Refers to Strong AI capable of
understanding, learning, and applying knowledge across various
domains, similar to human intelligence.

3. Artificial Superintelligence (ASI): Hypothetical AI surpassing human
intelligence in all aspects, potentially capable of solving complex
problems and making advancements beyond human comprehension.

8. Is AI dangerous?
Aside from planning for a future with super-intelligent computers, artificial intelligence in its current state may already pose problems.

9. What are the advantages of AI?


The advantages of AI include reducing the time it takes to complete a task, reducing the cost of previously manual activities, operating continuously without interruption or downtime, and improving the capabilities of people with disabilities.

10. What are the 7 main areas of AI?

The main seven areas of AI are:

1. Machine Learning: Involves algorithms that enable machines to learn from data and improve their performance without explicit programming.
2. Natural Language Processing (NLP): Focuses on enabling computers to
understand, interpret, and generate human language.
3. Computer Vision: Deals with giving machines the ability to interpret and
understand visual information from images or videos.
4. Robotics: Combines AI and mechanical engineering to create intelligent
machines capable of performing tasks autonomously.
5. Expert Systems: Utilizes knowledge and reasoning to solve complex
problems in specific domains, mimicking human expertise.
6. Speech Recognition: Involves converting spoken language into text or
commands, enabling machines to interact with users through speech.
7. Planning and Decision Making: Focuses on algorithms that allow AI
systems to make choices and optimize actions to achieve specific goals.

History of AI

Artificial intelligence, or at least the modern concept of it, has been with us for
several decades, but only in the recent past has AI captured the collective psyche
of everyday business and society.

The introduction of AI in the 1950s very much paralleled the beginnings of the
Atomic Age. Though their evolutionary paths have differed, both technologies are
viewed as posing an existential threat to humanity.

Perceptions about the darker side of AI aside, artificial intelligence tools and technologies, since the advent of the Turing test in 1950, have made incredible strides -- despite the intermittent roller-coaster rides due mainly to fits and starts in funding for AI research. Many of these breakthrough advancements have flown under
the radar, visible mostly to academic, government and scientific research circles
until the past decade or so, when AI was practically applied to the wants and needs
of the masses. AI products such as Apple's Siri and Amazon's Alexa, online shopping, social media feeds and self-driving cars have forever altered the lifestyles of consumers and operations of businesses.

Through the decades, some of the more notable developments include the
following:

 Neural networks and the coining of the terms artificial intelligence and machine learning in the 1950s.

 Eliza, the chatbot with cognitive capabilities, and Shakey, the first mobile
intelligent robot, in the 1960s.

 AI winter followed by AI renaissance in the 1970s and 1980s.

 Speech and video processing in the 1990s.

 IBM Watson, personal assistants, facial recognition, deepfakes, autonomous vehicles, and content and image creation in the 2000s.

1950

Alan Turing published "Computing Machinery and Intelligence," introducing the Turing test and opening the doors to what would be known as AI.

1951

Marvin Minsky and Dean Edmonds developed the first artificial neural network
(ANN) called SNARC using 3,000 vacuum tubes to simulate a network of 40 neurons.

1952

Arthur Samuel developed the Samuel Checkers-Playing Program, the world's first self-learning game-playing program.

1956

John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon coined
the term artificial intelligence in a proposal for a workshop widely recognized as a
founding event in the AI field.

1958

Frank Rosenblatt developed the perceptron, an early ANN that could learn from
data and became the foundation for modern neural networks.

John McCarthy developed the programming language Lisp, which was quickly
adopted by the AI industry and gained enormous popularity among developers.

1959

Arthur Samuel coined the term machine learning in a seminal paper explaining
that the computer could be programmed to outplay its programmer.

Oliver Selfridge published "Pandemonium: A Paradigm for Learning," a landmark contribution to machine learning that described a model that could adaptively improve itself to find patterns in events.

1964

Daniel Bobrow developed STUDENT, an early natural language processing (NLP) program designed to solve algebra word problems, while he was a doctoral candidate at MIT.

1965

Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg and Carl Djerassi developed the first expert system, Dendral, which assisted organic chemists in identifying unknown organic molecules.

1966

Joseph Weizenbaum created Eliza, one of the more celebrated computer programs of all time, capable of engaging in conversations with humans and making them believe the software had humanlike emotions.

Stanford Research Institute developed Shakey, the world's first mobile intelligent
robot that combined AI, computer vision, navigation and NLP. It's the grandfather of
self-driving cars and drones.

1968

Terry Winograd created SHRDLU, the first multimodal AI that could manipulate and
reason out a world of blocks according to instructions from a user.

1969

Arthur Bryson and Yu-Chi Ho described a backpropagation learning algorithm to enable multilayer ANNs, an advancement over the perceptron and a foundation for deep learning.

Marvin Minsky and Seymour Papert published the book Perceptrons, which
described the limitations of simple neural networks and caused neural network
research to decline and symbolic AI research to thrive.

1973

James Lighthill released the report "Artificial Intelligence: A General Survey," which
caused the British government to significantly reduce support for AI research.

1980

Symbolics Lisp machines were commercialized, signaling an AI renaissance. Years later, the Lisp machine market collapsed.

1981

Danny Hillis designed parallel computers for AI and other computational tasks, an
architecture similar to modern GPUs.

1984

Marvin Minsky and Roger Schank coined the term AI winter at a meeting of the
Association for the Advancement of Artificial Intelligence, warning the business
community that AI hype would lead to disappointment and the collapse of the
industry, which happened three years later.

1985

Judea Pearl introduced Bayesian networks and causal analysis, which provide statistical techniques for representing uncertainty in computers.

1988

Peter Brown et al. published "A Statistical Approach to Language Translation," paving the way for one of the more widely studied machine translation methods.

1989

Yann LeCun, Yoshua Bengio and Patrick Haffner demonstrated how convolutional
neural networks (CNNs) can be used to recognize handwritten characters, showing
that neural networks could be applied to real-world problems.


1997

Sepp Hochreiter and Jürgen Schmidhuber proposed the Long Short-Term
Memory recurrent neural network, which could process entire sequences of data
such as speech or video.

IBM's Deep Blue defeated Garry Kasparov in a historic chess rematch, the first
defeat of a reigning world chess champion by a computer under tournament
conditions.

2000

University of Montreal researchers published "A Neural Probabilistic Language Model," which suggested a method to model language using feedforward neural networks.

2006

Fei-Fei Li started working on the ImageNet visual database, introduced in 2009, which became a catalyst for the AI boom and the basis of an annual competition for image recognition algorithms.

IBM Watson originated with the initial goal of beating a human on the iconic quiz
show Jeopardy! In 2011, the question-answering computer system defeated the
show's all-time (human) champion, Ken Jennings.

2009

Rajat Raina, Anand Madhavan and Andrew Ng published "Large-Scale Deep Unsupervised Learning Using Graphics Processors," presenting the idea of using GPUs to train large neural networks.

2011

Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier and Jonathan Masci
developed the first CNN to achieve "superhuman" performance by winning the
German Traffic Sign Recognition competition.

Apple released Siri, a voice-powered personal assistant that can generate responses and take actions in response to voice requests.

2012

Geoffrey Hinton, Ilya Sutskever and Alex Krizhevsky introduced a deep CNN
architecture that won the ImageNet challenge and triggered the explosion of deep
learning process and implementation.

2013

China's Tianhe-2 doubled the world's top supercomputing speed at 33.86 petaflops, retaining the title of the world's fastest system for the third consecutive time.

DeepMind introduced deep reinforcement learning, a CNN that learned based on rewards and learned to play games through repetition, surpassing human expert levels.

Google researcher Tomas Mikolov and colleagues introduced Word2vec to automatically identify semantic relationships between words.

2014

Ian Goodfellow and colleagues invented generative adversarial networks, a class of machine learning frameworks used to generate photos, transform images and create deepfakes.

Diederik Kingma and Max Welling introduced variational autoencoders to generate images, videos and text.

Facebook developed the deep learning facial recognition system DeepFace, which
identifies human faces in digital images with near-human accuracy.

2016

DeepMind's AlphaGo defeated top Go player Lee Sedol in Seoul, South Korea,
drawing comparisons to the Kasparov chess match with Deep Blue nearly 20 years
earlier.

Uber started a self-driving car pilot program in Pittsburgh for a select group of
users.


2017

Stanford researchers published work on diffusion models in the paper "Deep Unsupervised Learning Using Nonequilibrium Thermodynamics." The technique provides a way to reverse-engineer the process of adding noise to a final image.

Google researchers developed the concept of transformers in the seminal paper
"Attention Is All You Need," inspiring subsequent research into tools that could
automatically parse unlabeled text into large language models (LLMs).

British physicist Stephen Hawking warned, "Unless we learn how to prepare for, and
avoid, the potential risks, AI could be the worst event in the history of our civilization."

2018

Developed by IBM, Airbus and the German Aerospace Center DLR, Cimon was the
first robot sent into space to assist astronauts.

OpenAI released GPT (Generative Pre-trained Transformer), paving the way for
subsequent LLMs.

Groove X unveiled a home mini-robot called Lovot that could sense and affect
mood changes in humans.

2019

Microsoft launched the Turing Natural Language Generation generative language model with 17 billion parameters.

Google AI and Langone Medical Center's deep learning algorithm outperformed radiologists in detecting potential lung cancers.

2020

The University of Oxford developed an AI test called Curial to rapidly identify COVID-19 in emergency room patients.

Nvidia announced the beta version of its Omniverse platform to create 3D models
in the physical world.

DeepMind's AlphaFold system won the Critical Assessment of Protein Structure Prediction protein-folding contest.

2021

OpenAI introduced the Dall-E multimodal AI system that can generate images
from text prompts.

The University of California, San Diego, created a four-legged soft robot that
functioned on pressurized air instead of electronics.

2022

Google software engineer Blake Lemoine was fired for revealing details of the LaMDA chatbot and claiming it was sentient.

DeepMind unveiled AlphaTensor "for discovering novel, efficient and provably correct algorithms".

Intel claimed its FakeCatcher real-time deepfake detector was 96% accurate.

OpenAI released ChatGPT in November to provide a chat-based interface to its GPT-3.5 LLM.

2023

OpenAI announced the GPT-4 multimodal LLM that accepts both text and image
prompts.

Beyond 2023

We can only begin to envision AI's continuing technological advancements and
influences in business processes, manufacturing, healthcare, financial services,
marketing, customer experience, workforce environments, education, agriculture,
law, IT systems and management, cybersecurity, and ground, air and space
transportation.

In business, 55% of organizations that have deployed AI always consider AI for every
new use case they're evaluating, according to a 2023 Gartner survey. By 2026,
Gartner reported, organizations that "operationalize AI transparency, trust and
security will see their AI models achieve a 50% improvement in terms of adoption,
business goals and user acceptance."

Today's tangible developments -- some incremental, some disruptive -- are
advancing AI's ultimate goal of achieving artificial general intelligence. Along these
lines, neuromorphic processing shows promise in mimicking human brain cells,
enabling computer programs to work simultaneously instead of sequentially. Amid
these and other mind-boggling advancements, issues of trust, privacy,
transparency, accountability, ethics and humanity have emerged and will
continue to clash and seek levels of acceptability among business and society.

Turing Test in Artificial Intelligence

The Turing test was developed by Alan Turing, a computer scientist, in 1950. He
proposed that the Turing test be used to determine whether or not a
computer (machine) can think intelligently like a human.
The Turing Test is a widely used measure of a machine’s ability to demonstrate
human-like intelligence. It was first proposed by British mathematician and
computer scientist Alan Turing in 1950.
The basic idea of the Turing Test is simple: a human judge engages in a text-based
conversation with both a human and a machine, and then decides which of the
two they believe to be a human. If the judge is unable to distinguish between the
human and the machine based on the conversation, then the machine is said to
have passed the Turing Test.
The Turing Test is widely used as a benchmark for evaluating the progress of
artificial intelligence research, and has inspired numerous studies and experiments
aimed at developing machines that can pass the test.
While the Turing Test has been used as a measure of machine intelligence for over
six decades, it is not without its critics. Some argue that the test is too focused on
language and does not take into account other important aspects of intelligence,
such as perception, problem-solving, and decision-making.
Despite its limitations, the Turing Test remains an important reference point in the
field of artificial intelligence and continues to inspire new research and
development in this area.
Imagine a game of three players having two humans and one computer; an
interrogator (a human) is isolated from the other two players. The interrogator's
job is to figure out which one is human and which one is a computer by
asking questions of both of them. To make things harder, the computer tries to
make the interrogator guess wrongly. In other words, the computer tries to be as
indistinguishable from a human as possible.

In the "standard interpretation" of the Turing Test, player C, the interrogator,
is given the task of trying to determine which player -- A or B -- is a computer and
which is a human. The interrogator is limited to using the responses to written
questions to make the determination.
The conversation between interrogator and computer would be like this:
C(Interrogator): Are you a computer?
A(Computer): No
C: Multiply one large number by another: 158745887 * 56755647
A: After a long pause, an incorrect answer!
C: Add 5478012, 4563145
A: (Pauses for about 20 seconds and then gives an answer) 10041157
If the interrogator is unable to distinguish the answers provided by the human
from those provided by the computer, then the computer passes the test and the
machine (computer) is considered as intelligent as a human. In other words, a
computer would be considered intelligent if its conversation couldn't be easily
distinguished from a human's. The whole conversation would be limited to a text-
only channel such as a computer keyboard and screen.
He also proposed that by the year 2000 a computer “would be able to play the
imitation game so well that an average interrogator will not have more than a 70-
percent chance of making the right identification (machine or human) after five
minutes of questioning.” No computer has come close to this standard.
But in 1980, the philosopher John Searle proposed the "Chinese room argument".
He argued that the Turing test could not be used to determine whether a machine
is genuinely intelligent like a human: a machine like ELIZA or PARRY could pass
the Turing Test simply by manipulating symbols of which it had no understanding.
Without understanding, it could not be described as "thinking" in the same sense
people do.

Advantages of the Turing Test in Artificial Intelligence:

1. Evaluating machine intelligence: The Turing Test provides a simple and
well-known method for evaluating the intelligence of a machine.
2. Setting a benchmark: The Turing Test sets a benchmark for artificial
intelligence research and provides a goal for researchers to strive
towards.
3. Inspiring research: The Turing Test has inspired numerous studies and
experiments aimed at developing machines that can pass the test, which
has driven progress in the field of artificial intelligence.
4. Simple to administer: The Turing Test is relatively simple to administer
and can be carried out with just a computer and a human judge.

Disadvantages of the Turing Test in Artificial Intelligence:

1. Limited scope: The Turing Test is limited in scope, focusing primarily on
language-based conversations and not taking into account other
important aspects of intelligence, such as perception, problem-solving,
and decision-making.
2. Human bias: The results of the Turing Test can be influenced by the biases
and preferences of the human judge, making it difficult to obtain
objective and reliable results.
3. Not representative of real-world AI: The Turing Test may not be
representative of the kind of intelligence that machines need to
demonstrate in real-world applications.

Distance function :
A distance function provides distance between the elements of a set.
A metric or distance function is a function d which takes pairs of points or
objects to real numbers and satisfies the following rules:

· The distance between an object and itself is always zero.

· The distance between distinct objects is always positive.

· Distance is symmetric: the distance from x to y is always the same as
the distance from y to x, i.e. d(x,y) = d(y,x) for all x, y.

· Distance satisfies the triangle inequality: if x, y, and z are three objects,
then d(x,z) ≤ d(x,y) + d(y,z).

To calculate the distance between data points A and B in the plane, the
Pythagorean theorem is applied to the lengths along the x and y axes.

A distance function provides the distance between the elements of a set. If the
distance is zero, the elements are equivalent; otherwise they are different from
each other.
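These axioms can be checked directly in code. The sketch below is illustrative only: it uses the absolute difference |x − y| as the distance function on a few real numbers and asserts each of the four properties.

```python
def d(x, y):
    """Distance between two real numbers: d(x, y) = |x - y|."""
    return abs(x - y)

points = [2.0, -3.5, 7.0]
for x in points:
    assert d(x, x) == 0                    # distance to itself is zero
    for y in points:
        assert d(x, y) == d(y, x)          # symmetry
        if x != y:
            assert d(x, y) > 0             # distinct objects: positive distance
        for z in points:
            assert d(x, z) <= d(x, y) + d(y, z)  # triangle inequality
print("all four metric axioms hold on the sample points")
```

The same checks work for any candidate distance function, for example the Euclidean distance used later in this section.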

Distance function (vector norm):

Let's take an n-dimensional vector x = (x1, x2, …, xn).

A general vector norm ǀxǀ, sometimes written as ǁxǁ, is a non-negative
norm defined such that
· ǀxǀ > 0 when x ≠ 0, and ǀxǀ = 0 iff x = 0
· ǀkxǀ = ǀkǀǀxǀ for any scalar k
· ǀx+yǀ ≤ ǀxǀ + ǀyǀ

The vector norm ǀxǀp for p = 1, 2, … is defined as

ǀxǀp = (ǀx1ǀ^p + ǀx2ǀ^p + … + ǀxnǀ^p)^(1/p)

Some examples of different norms for X = (1, 2, 3):

Name     Symbol   Value      Approx.
L1-norm  ǀxǀ1     6          6.000
L2-norm  ǀxǀ2     √14        3.742
L3-norm  ǀxǀ3     36^(1/3)   3.302
L4-norm  ǀxǀ4     98^(1/4)   3.146
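These values can be reproduced with a short Python sketch (`p_norm` is a hypothetical helper written for this example, not a library function):

```python
def p_norm(x, p):
    """General vector p-norm: (sum of |x_i|^p) raised to the power 1/p."""
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

x = (1, 2, 3)
for p in (1, 2, 3, 4):
    # Prints 6.000, 3.742, 3.302 and 3.146 for p = 1, 2, 3, 4
    print(f"L{p}-norm of {x} = {p_norm(x, p):.3f}")
```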

Euclidean Norm
The most commonly encountered vector norm (often simply called the
norm of a vector, or sometimes the magnitude of a vector) is the L2
norm, given by
ǀxǀ2 = √(x1² + x2² + … + xn²)
It is commonly known as the Euclidean norm.
In n-dimensional Euclidean space, the intuitive notion of the length of the
vector x = (x1, x2, …, xn) is
√(x1² + x2² + … + xn²)
This is the Euclidean norm, which gives the ordinary distance from the
origin to the point x, a consequence of the Pythagorean theorem.
Distance between two points in 2D
If the points X = (x1, y1) and Y = (x2, y2) are in 2-dimensional space, then the
Euclidean distance between them is
d(X, Y) = √((x2 − x1)² + (y2 − y1)²)

Distance between two points in 3D
If the points in 3-dimensional space are X = (x1, y1, z1) and Y = (x2, y2, z2),
the distance between them is
d(X, Y) = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)

Distance between two points in n dimensions
If the points in n-dimensional space are X = (x1, x2, …, xn) and
Y = (y1, y2, …, yn), the distance between them is
d(X, Y) = √((y1 − x1)² + (y2 − x2)² + … + (yn − xn)²)
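The 2D, 3D and n-dimensional formulas are all the same computation, as this minimal sketch shows (the function name is our own choice):

```python
import math

def euclidean_distance(p, q):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# 2D: sides 3 and 4 give the familiar 3-4-5 right triangle
print(euclidean_distance((1, 2), (4, 6)))        # 5.0
# 3D
print(euclidean_distance((0, 0, 0), (1, 2, 2)))  # 3.0
```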

What is a Graph

In Mathematics, a graph is a pictorial representation of data in an
organised manner. A graph shows the relationship between variable
quantities. In graph theory, a graph represents a set of objects that
are related in some sense to each other. The objects are
mathematical concepts, expressed by vertices or nodes, and the relation
between a pair of nodes is expressed by an edge.
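As a small illustration (the vertex names are arbitrary, chosen for this sketch), a graph can be stored as an adjacency list, one entry per vertex listing its neighbouring vertices:

```python
# Graph with vertices A, B, C, D and edges A-B, A-C, B-C, C-D.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

vertices = list(graph)
# Each undirected edge appears twice in the adjacency list (once per
# endpoint), so collect edges as unordered pairs to count each once.
edges = {frozenset((u, v)) for u in graph for v in graph[u]}
print(len(vertices), "vertices and", len(edges), "edges")  # 4 vertices and 4 edges
```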

Correlation in Statistics
Methods of correlation summarize the relationship between two variables
in a single number called the correlation coefficient. The correlation
coefficient is usually represented using the symbol r, and it ranges from
-1 to +1.
A correlation coefficient quite close to 0, but either positive or negative,
implies little or no relationship between the two variables. A correlation
coefficient close to plus 1 means a positive relationship between the two
variables, with increases in one of the variables being associated with
increases in the other variable.
A correlation coefficient close to -1 indicates a negative relationship
between two variables, with an increase in one of the variables being
associated with a decrease in the other variable.
For example, if a correlation exists between two variables X and Y, then
when the value of one variable changes in one direction, the value of the
other variable changes either in the same direction (i.e. positive change)
or in the opposite direction (i.e. negative change). Furthermore, if the
correlation is linear, we can represent the relative movement of the two
variables by drawing a straight line on graph paper.
Correlation Coefficient
The correlation coefficient, r, is a summary measure that describes the
extent of the statistical relationship between two variables. The correlation
coefficient is scaled so that it is always between -1 and +1. When r is close
to 0, there is little relationship between the variables; the farther r is from
0, in either the positive or negative direction, the stronger the relationship
between the two variables.
Types of Correlation
The scatter plot explains the correlation between the two attributes or
variables. It represents how closely the two variables are connected.
There can be three such situations to see the relation between the two
variables –
 Positive Correlation – when the values of the two variables move in
the same direction so that an increase/decrease in the value of one
variable is followed by an increase/decrease in the value of the other
variable.

For example, one instance of positive correlation is the relationship
between employment and inflation. High levels of employment
require employers to offer higher salaries in order to attract new
workers, and higher prices for their products in order to fund those
higher salaries. Conversely, periods of high unemployment
experience falling consumer demand, resulting in downward
pressure on prices and inflation.
 Negative Correlation – when the values of the two variables move
in the opposite direction so that an increase/decrease in the value
of one variable is followed by decrease/increase in the value of the
other variable.

For example, examples of negative correlation are common in the
investment world. A well-known example is the negative correlation
between crude oil prices and airline stock prices. Jet fuel, which is
derived from crude oil, is a large cost input for airlines and has a
significant impact on their profitability and earnings.
If the price of crude oil spikes up, it could have a negative impact on
airlines' earnings and hence on the price of their stocks. But if the
price of crude oil trends lower, this should boost airline profits and
therefore their stock prices.

 No Correlation – when there is no linear dependence or no relation
between the two variables.

Correlation Formula
Correlation shows the relation between two variables. Correlation
coefficient shows the measure of correlation. To compare two
datasets, we use the correlation formulas.
Pearson Correlation Coefficient Formula
The most common formula is the Pearson Correlation coefficient
used for linear dependency between the data sets. The value of
the coefficient lies between -1 to +1. When the coefficient comes
down to zero, then the data is considered as not related. While, if
we get the value of +1, then the data are positively correlated, and
-1 has a negative correlation.

The Pearson correlation coefficient is given by

r = (nΣxy − ΣxΣy) / √([nΣx² − (Σx)²][nΣy² − (Σy)²])

Where n = the number of data points
Σx = total of the first variable's values
Σy = total of the second variable's values
Σxy = sum of the products of the first and second values
Σx² = sum of the squares of the first values
Σy² = sum of the squares of the second values
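As a sketch, the formula translates directly into Python; the function name and the small data set below are our own, chosen only for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson r from the raw-sum formula:
    r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2))."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return (n * sxy - sx * sy) / math.sqrt(
        (n * sxx - sx ** 2) * (n * syy - sy ** 2)
    )

# The two variables mostly rise together, so r is positive but below 1.
print(round(pearson_r([2, 4, 6, 8], [3, 7, 5, 10]), 3))  # 0.821
```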
Linear Correlation Coefficient Formula
The formula for the linear correlation coefficient is given by:

r = (nΣxy − ΣxΣy) / √([nΣx² − (Σx)²][nΣy² − (Σy)²])
When using the Pearson correlation coefficient formula, you'll need to consider
whether you're dealing with data from a sample or the whole population. The
sample and population formulas differ in their symbols and inputs. A sample
correlation coefficient is called r, while a population correlation coefficient is
called rho, the Greek letter ρ.
Sample Correlation Coefficient Formula
The formula is given by:

r = Σ(xi − x̄)(yi − ȳ) / √(Σ(xi − x̄)² · Σ(yi − ȳ)²)
Population Correlation Coefficient Formula

The population correlation coefficient between two random variables X, Y
with expected values μX and μY and standard deviations σX and σY is
defined as:

ρ(X, Y) = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY)

where E is the expected value, cov is the covariance and corr is the
correlation coefficient.
Examples using Correlation Coefficient Formula
Example 1. Given the following population data, find the Pearson
correlation coefficient between x and y for this data. (Take 1/√7 as
0.378.)

x 600 800 1000

y 1200 1000 2000

Solution:
To simplify the calculation, we divide both x and y by 100.

x/100   y/100   xi−x̄   yi−ȳ   (xi−x̄)²   (yi−ȳ)²   (xi−x̄)(yi−ȳ)
6       12      -2      -2     4          4          4
8       10      0       -4     0          16         0
10      20      2       6      4          36         12
x̄ = 8   ȳ = 14                 Σ(xi−x̄)² = 8   Σ(yi−ȳ)² = 56   Σ(xi−x̄)(yi−ȳ) = 16

Using the correlation coefficient formula,

r = Σ(xi−x̄)(yi−ȳ) / √(Σ(xi−x̄)² · Σ(yi−ȳ)²) = 16 / √(8 × 56) = 16 / (8√7)
= 2/√7 = 2 × 0.378 = 0.756

Answer: Pearson correlation coefficient = 0.756
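The arithmetic above can be double-checked numerically with the deviation form of the formula (a quick sketch reusing the scaled table data):

```python
import math

x = [6, 8, 10]    # x/100 values from the table
y = [12, 10, 20]  # y/100 values from the table

mx = sum(x) / len(x)  # x-bar = 8
my = sum(y) / len(y)  # y-bar = 14
sxx = sum((xi - mx) ** 2 for xi in x)                     # 8
syy = sum((yi - my) ** 2 for yi in y)                     # 56
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # 16

r = sxy / math.sqrt(sxx * syy)
print(round(r, 3))  # 0.756
```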

Example 2. A survey was conducted in your city. Given is the following sample data
containing a person's age and their corresponding income. Find out whether the
increase in age has an effect on income using the correlation coefficient formula.
(Use 1/√181 as 0.074 and 1/√2091 as 0.07.)

Age 25 30 36 43

Income 30000 44000 52000 70000

Solution:
To simplify the calculation, we divide y by 1000.

Age (xi)   Income/1000 (yi/1000)   xi−x̄   yi−ȳ   (xi−x̄)²   (yi−ȳ)²   (xi−x̄)(yi−ȳ)
25         30                      -8.5    -19    72.25      361        161.5
30         44                      -3.5    -5     12.25      25         17.5
36         52                      2.5     3      6.25       9          7.5
43         70                      9.5     21     90.25      441        199.5
x̄ = 33.5   ȳ = 49                  Σ(xi−x̄)² = 181   Σ(yi−ȳ)² = 836   Σ(xi−x̄)(yi−ȳ) = 386

Pearson correlation coefficient for the sample:

r = Σ(xi−x̄)(yi−ȳ) / √(Σ(xi−x̄)² · Σ(yi−ȳ)²) = 386 / √(181 × 836) ≈ 0.9923

Therefore r = 0.9923

Answer: Yes, with the increase in age a person's income increases as well,
since the Pearson correlation coefficient between age and income is very
close to 1.

Example 3: Calculate the correlation coefficient of the given data.

x   41    42    43    44    45

y   3.2   3.3   3.4   3.5   3.6

Solution:

Here n = 5

Let us find ∑x, ∑y, ∑xy, ∑x², ∑y²

x     y     xy      x²     y²
41    3.2   131.2   1681   10.24
42    3.3   138.6   1764   10.89
43    3.4   146.2   1849   11.56
44    3.5   154.0   1936   12.25
45    3.6   162.0   2025   12.96
∑x = 215   ∑y = 17   ∑xy = 732   ∑x² = 9255   ∑y² = 57.9

X values:
∑x = 215
∑x² = 9255
x̄ = 43
∑(x − x̄)² = 10

Y values:
∑y = 17
∑y² = 57.9
ȳ = 3.4
∑(y − ȳ)² = 0.1

X and Y combined:
n = 5
∑((x − x̄)(y − ȳ)) = 1
∑xy = 732

r calculation:
r = ∑((x − x̄)(y − ȳ)) / √(∑(x − x̄)² · ∑(y − ȳ)²) = 1/√((10)(0.1)) = 1/√1 = 1
Since r = 1, x and y are perfectly positively correlated.
Regression Analysis
Regression analysis refers to assessing the relationship between an
outcome variable and one or more other variables. The outcome
variable is known as the dependent variable and the predictors are
known as independent variables. The dependent variable is denoted
by "y" and the independent variables by "x" in regression analysis.
Linear Regression
Linear regression is a linear approach to modelling the relationship
between a scalar response and one or more independent variables.
If the regression has one independent variable, then it is known as a
simple linear regression. If it has more than one independent
variable, then it is known as multiple linear regression.

· A regression is a statistical technique that relates a dependent
variable to one or more independent (explanatory) variables.
· A regression model is able to show whether changes observed
in the dependent variable are associated with changes in one
or more of the explanatory variables.
· It does this by essentially fitting a best-fit line and seeing how
the data is dispersed around this line.

· Regression helps economists and financial analysts in things
ranging from asset valuation to making predictions.
· In order for regression results to be properly interpreted, several
assumptions about the data and the model itself must hold.

The formula for the linear regression equation is given by:

y = a + bx

a and b are given by the following formulas:

b = (n∑xy − ∑x∑y) / (n∑x² − (∑x)²)
a = (∑y − b∑x) / n

Where,
x and y are two variables on the regression line,
b = slope of the line,
a = y-intercept of the line,
x = values of the first data set,
y = values of the second data set.

Solved Examples
Question: Find linear regression equation for the following two
sets of data:

x 2 4 6 8

y 3 7 5 10

Solution:

x    y    x²    xy
2    3    4     6
4    7    16    28
6    5    36    30
8    10   64    80
∑x = 20   ∑y = 25   ∑x² = 120   ∑xy = 144

b = (n∑xy − ∑x∑y) / (n∑x² − (∑x)²) = (4 × 144 − 20 × 25) / (4 × 120 − 20²) = 76/80
b = 0.95

a = (∑y − b∑x) / n = (25 − 0.95 × 20) / 4 = 6/4
a = 1.5

Linear regression is given by:
y = a + bx
y = 1.5 + 0.95x
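The same least-squares calculation can be sketched in Python (the function name is ours; the formulas for b and a are the ones given above):

```python
def fit_line(x, y):
    """Least-squares fit of y = a + b*x using
    b = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2) and a = (Sy - b*Sx) / n."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a = (sy - b * sx) / n
    return a, b

a, b = fit_line([2, 4, 6, 8], [3, 7, 5, 10])
print(round(a, 2), round(b, 2))  # 1.5 0.95
```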
Correlation and Regression Differences

There are some differences between Correlation and regression.

 Correlation shows the degree to which two variables are associated. It does
not fit a line through the data points. You compute a correlation that shows
how much one variable tends to change when the other changes.
 Linear regression finds the best line that predicts y from x; correlation does
not fit a line.
 Correlation is used when you measure both variables, while linear regression is
mostly applied when x is a variable that is manipulated.

Comparison Between Correlation and Regression

Basis: Correlation | Regression

Meaning: A statistical measure that defines the co-relationship or
association of two variables. | Describes how an independent variable is
associated with the dependent variable.

Dependent and independent variables: No difference. | Both variables are
different.

Usage: To describe a linear relationship between two variables. | To fit the
best line and estimate one variable based on another variable.

Objective: To find a value expressing the relationship between variables. |
To estimate values of a random variable based on the values of a fixed
variable.


Introduction to Linear Algebra and statistics for AI:

 Basic matrix operations like matrix addition, subtraction, multiplication,
transpose of matrix, identity matrix --- refer to any standard Maths text book
of Class XI, XII.
 Brief introduction to vectors, unit vector, normal vector, Euclidean space ---
refer to any standard Maths text book of Class XI, XII.
 Probability distribution, frequency, mean, median and mode, variance and
standard deviation, Gaussian distribution --- refer to any standard Maths
text book of Class XI, XII.

