
Artificial Intelligence

By
Ms. Kalyani Akhade
CE Dept.
Unit-2: Problem-Solving

➢ Problem-Solving Agents
➢ Example Problems
➢ Search Algorithms
➢ Uninformed Search Strategies
➢ Informed (Heuristic) Search Strategies
➢ Heuristic Functions
➢ Search in Complex Environments
➢ Local Search and Optimization Problems



Problem-Solving Agents
➢ Search algorithms are one of the most important areas of Artificial
Intelligence.
➢ In Artificial Intelligence, search techniques are universal problem-solving
methods.
➢ Problem-solving agents in AI mostly use these search strategies or
algorithms to solve a specific problem and provide the best result.
➢ Problem-solving agents are goal-based agents and use an atomic
representation.



Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Initial State
➢ Actions
➢ Transition Model
➢ Goal Test
➢ Path Cost



Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Initial State: The initial state that the agent starts in.



Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Actions: A description of the possible actions available to the agent.
Given a particular state s,
➢ ACTIONS(s) returns the set of actions that can be executed in s.
➢ We say that each of these actions is applicable in s.
➢ For example, from the state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.



Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Transition Model: A description of what each action does; the formal
name for this is the transition model, specified by a function
RESULT(s, a) that returns the state that results from doing action a in
state s.
➢ We also use the term successor to refer to any state reachable from a
given state by a single action.
➢ For example, we have RESULT(In(Arad),Go(Zerind)) = In(Zerind) .
➢ Together, the initial state, actions, and transition model implicitly define
the state space of the problem
➢ The state space forms a directed network or graph in which the nodes
are states and the links between nodes are actions.
Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Goal Test: The goal test, which determines whether a given state is a
goal state
➢ Sometimes there is an explicit set of possible goal states, and the test
simply checks whether the given state is one of them.



Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Path Cost: A path cost function that assigns a numeric cost to each
path
➢ The problem-solving agent chooses a cost function that reflects its
own performance measure.
➢ The step cost of taking action a in state s to reach state s′ is denoted
by c(s, a, s′).



Problem-Solving Agents
Well-defined problems and solutions
➢ A problem can be defined formally by five components:
➢ Initial State
➢ Actions
➢ Transition Model
➢ Goal Test
➢ Path Cost
➢ All above elements define a problem and can be gathered into a single data
structure that is given as input to a problem-solving algorithm
➢ A solution to a problem is an action sequence that leads from the initial state to a
goal state.
➢ Solution quality is measured by the path cost function, and an optimal solution has
the lowest path cost among all solutions
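For illustration, here is a minimal Python sketch (not from the slides; states are simply city names and the map fragment is assumed) of how the five components can be gathered into one data structure:

class RouteProblem:
    def __init__(self, initial, goal, road_map):
        self.initial = initial            # Initial State
        self.goal = goal                  # used by the Goal Test
        self.road_map = road_map          # adjacency dict: city -> {neighbour: cost}

    def actions(self, state):
        # ACTIONS(s): the cities reachable from s by one road segment
        return list(self.road_map[state].keys())

    def result(self, state, action):
        # RESULT(s, a), the Transition Model: driving to a neighbouring city
        return action

    def goal_test(self, state):
        # Goal Test: are we in the goal city?
        return state == self.goal

    def step_cost(self, s, action, s2):
        # c(s, a, s'): the Path Cost of a path is the sum of its step costs
        return self.road_map[s][s2]

road_map = {
    "Arad":  {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
}
problem = RouteProblem("Arad", "Sibiu", road_map)
print(problem.actions("Arad"))            # ['Sibiu', 'Timisoara', 'Zerind']
print(problem.result("Arad", "Sibiu"))    # 'Sibiu'
print(problem.goal_test("Sibiu"))         # True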
Problem-Solving Agents
Well-defined problems and solutions



Problem-Solving Agents
➢ Search: Searching is a step-by-step procedure to solve a search problem
in a given search space.
A search problem can have three main factors:
➢ Search Space: The search space represents the set of possible solutions
which a system may have.
➢ Start State: It is the state from which the agent begins the search.
➢ Goal Test: It is a function which observes the current state and returns
whether the goal state has been achieved or not.



Problem-Solving Agents
➢ Search tree: A tree representation of a search problem is called a search tree.
The root of the search tree is the root node, which corresponds to the
initial state.
➢ Actions: It gives the description of all the actions available to the agent.
➢ Transition model: A description of what each action does can be
represented as a transition model.
➢ Path Cost: It is a function which assigns a numeric cost to each path.
➢ Solution: It is an action sequence which leads from the start node to the
goal node.
➢ Optimal Solution: A solution that has the lowest cost among all solutions.



Problem-Solving Agents
➢ Example Problems
➢ The problem-solving approach has been applied to a vast array of task
environments.
➢ A toy problem is intended to illustrate or exercise various problem-
solving methods. It can be given a concise, exact description and hence is
usable by different researchers to compare the performance of algorithms
➢ A real-world problem is one whose solutions people actually care about.
Such problems tend not to have a single agreed-upon description, but we
can give the general flavor of their formulations



Problem-Solving Agents
➢ Toy Problems

The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck.
Problem-Solving Agents
➢ Toy Problems
➢ States: The state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or might
not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A larger
environment with n locations has n × 2^n states.
➢ Initial state: Any state can be designated as the initial state.
➢ Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck. Larger environments might also include Up and Down.
➢ Transition model: The actions have their expected effects
➢ Goal test: This checks whether all the squares are clean.
➢ Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
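For illustration, a small Python sketch (the state encoding below is an assumption, not from the slides) of the two-square vacuum world's transition model and goal test:

def result(state, action):
    # A state is (agent_location, dirt_at_A, dirt_at_B): 2 x 2^2 = 8 states.
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return ("A", False, dirt_b) if loc == "A" else ("B", dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    # Goal test: all squares are clean.
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

state = ("A", True, True)                 # agent in A, both squares dirty
for action in ["Suck", "Right", "Suck"]:  # path cost = 3 steps
    state = result(state, action)
print(state, goal_test(state))            # ('B', False, False) True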
Problem-Solving Agents
➢ Toy Problems

A typical instance of the 8-puzzle.



Problem-Solving Agents
Toy Problems
➢ States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
➢ Initial state: Any state can be designated as the initial state. Note that any given
goal can be reached from exactly half of the possible initial states
➢ Actions: The simplest formulation defines the actions as movements of the
blank space Left, Right, Up, or Down. Different subsets of these are possible
depending on where the blank is.
➢ Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure, the resulting state has the 5
and the blank switched.
➢ Goal test: This checks whether the state matches the goal configuration shown
in Figure (Other goal configurations are possible.)
➢ Path cost: Each step costs 1, so the path cost is the number of steps in the path.
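For illustration, a Python sketch of the 8-puzzle's ACTIONS and RESULT functions; the tuple encoding and the start state below are assumptions following the standard textbook instance:

def actions(state):
    # Moves of the blank (encoded as 0) that are applicable in this state.
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, action):
    # Swap the blank with the neighbouring tile in the chosen direction.
    i = state.index(0)
    j = i + {"Left": -1, "Right": +1, "Up": -3, "Down": +3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)                 # read row by row; 0 is the blank
print(actions(start))             # ['Left', 'Right', 'Up', 'Down']
print(result(start, "Left"))      # the 5 and the blank are switched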
Problem-Solving Agents
Real-World Problems
➢ route-finding problem
➢ States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these “historical” aspects.
➢ Initial state: This is specified by the user’s query.
➢ Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
➢ Transition model: The state resulting from taking a flight will have the flight’s destination
as the current location and the flight’s arrival time as the current time.
➢ Goal test: Are we at the final destination specified by the user?
➢ Path cost: This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
Problem-Solving Agents
Real-World Problems
➢ Touring problems - closely related to route-finding problems
➢ example, the problem “Visit every city in Figure at least once, starting and
ending in Bucharest.”
➢ the actions correspond to trips between adjacent cities
➢ Each state must include not just the current location but also the set of cities
the agent has visited
➢ So the initial state would be In(Bucharest), Visited({Bucharest})
➢ A typical intermediate state would be In(Vaslui), Visited({Bucharest,
Urziceni, Vaslui})
➢ the goal test would check whether the agent is in Bucharest and all 20 cities
have been visited.



Problem-Solving Agents
Real-World Problems

➢ The traveling salesperson problem (TSP) is a touring problem in which
each city must be visited exactly once.
➢ The aim is to find the shortest tour
➢ The problem is known to be NP-hard, but an enormous amount of effort
has been expended to improve the capabilities of TSP algorithms
➢ In addition to planning trips for traveling salespersons, these algorithms
have been used for tasks such as planning movements of automatic circuit-
board drills and of stocking machines on shop floors.



Problem-Solving Agents
Real-World Problems
➢ A VLSI layout problem requires positioning millions of components and connections on
a chip to minimize area, minimize circuit delays, minimize stray capacitances, and
maximize manufacturing yield
➢ The layout problem comes after the logical design phase and is usually split into two
parts: cell layout and channel routing
➢ In cell layout, the primitive components of the circuit are grouped into cells, each of
which performs some recognized function.
➢ Each cell has a fixed footprint (size and shape) and requires a certain number of
connections to each of the other cells.
➢ The aim is to place the cells on the chip so that they do not overlap and so that there is
room for the connecting wires to be placed between the cells.
➢ Channel routing finds a specific route for each wire through the gaps between the cells.
➢ These search problems are extremely complex, but definitely worth solving; some
algorithms are capable of solving them.
Problem-Solving Agents
Real-World Problems

➢ Robot navigation is a generalization of the route-finding problem.


➢ Rather than following a discrete set of routes, a robot can move in a
continuous space with an infinite set of possible actions and states
➢ For a circular robot moving on a flat surface, the space is essentially two-
dimensional.
➢ When the robot has arms and legs or wheels that must also be controlled, the
search space becomes many-dimensional
➢ In addition to the complexity of the problem, real robots must also deal with
errors in their sensor readings and motor controls.



Problem-Solving Agents
Real-World Problems

➢ Automatic assembly sequencing of complex objects by a robot was first
demonstrated by FREDDY (Michie, 1972).
➢ In assembly problems, the aim is to find an order in which to assemble the
parts of some object.
➢ If the wrong order is chosen, there will be no way to add some part later in
the sequence without undoing some of the work already done
➢ Checking a step in the sequence for feasibility is a difficult geometrical
search problem closely related to robot navigation
➢ Another important assembly problem is protein design, in which the goal is to
find a sequence of amino acids that will fold into a three-dimensional
protein with the right properties to cure some disease.
Search Strategies
Uninformed Search Strategies
➢ Uninformed search is also called blind search.
➢ The strategies have no additional information about states beyond that
provided in the problem definition
➢ All they can do is generate successors and distinguish a goal state from a
non-goal state
➢ All search strategies are distinguished by the order in which nodes are
expanded



Search Strategies
Uninformed Search Strategies
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search



Search Strategies
Uninformed Search Strategies
Breadth-first Search
➢ Breadth-first search is a simple strategy in which the root node is expanded
first, then all the successors of the root node are expanded next, then their
successors, and so on.
➢ The breadth-first search algorithm is an example of a general-graph search
algorithm.
➢ Breadth-first search is implemented using a FIFO queue data structure.
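As a rough illustration, a minimal Python sketch of breadth-first search with a FIFO queue (the small graph below is an assumption, not the graph from the figure):

from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])           # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()         # shallowest node is expanded first
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "C": [], "D": ["E"], "G": [], "H": [], "E": []}
print(breadth_first_search(graph, "S", "E"))   # ['S', 'A', 'D', 'E']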



Search Strategies
Uninformed Search Strategies
Breadth-first Search
BFS traversal order: S → A → B → C → D → G → H → E → F → I → K



Search Strategies
Uninformed Search Strategies
Breadth-first Search
➢ Time Complexity: The time complexity of BFS can be obtained from the number of
nodes traversed until the shallowest goal node, where d = depth of the shallowest
solution and b = branching factor (maximum number of successors of any node).
➢ T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
➢ Space Complexity: The space complexity of BFS is given by the memory size of the
frontier (the set of all leaf nodes available for expansion at any given point),
which is O(b^d).
➢ Completeness: BFS is complete, which means if the shallowest goal
node is at some finite depth, then BFS will find a solution.
➢ Optimality: BFS is optimal if path cost is a non-decreasing function of
the depth of the node.
Search Strategies
Uninformed Search Strategies
Uniform-cost Search
➢ Uniform-cost search is a searching algorithm used for traversing a weighted tree
or graph
➢ This algorithm comes into play when a different cost is available for each edge.
➢ The primary goal of the uniform-cost search is to find a path to the goal node
which has the lowest cumulative cost
➢ Uniform-cost search expands nodes according to their path costs from the root
node.
➢ It can be used to solve any graph/tree where the optimal cost is in demand
➢ A uniform-cost search algorithm is implemented by the priority queue
➢ Uniform cost search is equivalent to BFS algorithm if the path cost of all edges
is the same.
Search Strategies
Uninformed Search Strategies
Uniform-cost Search
➢ The problem is to get from Sibiu to Bucharest.



Search Strategies
Uninformed Search Strategies
Uniform-cost Search
➢ The problem is to get from Sibiu to Bucharest.

Graph Details:
• Nodes: Sibiu, Rimnicu Vilcea, Fagaras, Pitesti, Bucharest.
• Edge Costs:
• Sibiu → Rimnicu Vilcea: 80
• Sibiu → Fagaras: 99
• Rimnicu Vilcea → Pitesti: 97
• Rimnicu Vilcea → Bucharest: Not connected
• Fagaras → Bucharest: 211
• Pitesti → Bucharest: 101



Search Strategies
Uninformed Search Strategies
Uniform-cost Search
➢ The problem is to get from Sibiu to Bucharest.

Step 1: Initialize. Start at Sibiu with a cost of 0.
Priority Queue: [(0, Sibiu)]

Step 2: Expand Sibiu.
Sibiu → Rimnicu Vilcea (cost = 0 + 80 = 80)
Sibiu → Fagaras (cost = 0 + 99 = 99)
Priority Queue: [(80, Rimnicu Vilcea), (99, Fagaras)]

Step 3: Expand Rimnicu Vilcea (lowest cost = 80).
Rimnicu Vilcea → Pitesti (cost = 80 + 97 = 177)
Priority Queue: [(99, Fagaras), (177, Pitesti)]



Search Strategies
Uninformed Search Strategies
Uniform-cost Search
➢ The problem is to get from Sibiu to Bucharest.
Step 4: Expand Fagaras (lowest cost = 99).
Fagaras → Bucharest (cost = 99 + 211 = 310)
Priority Queue: [(177, Pitesti), (310, Bucharest)]

Step 5: Expand Pitesti (lowest cost = 177).
Pitesti → Bucharest (cost = 177 + 101 = 278)
Priority Queue: [(278, Bucharest), (310, Bucharest)]

Step 6: Expand Bucharest (lowest cost = 278).
Goal reached with cost 278.

Final Path: Sibiu → Rimnicu Vilcea → Pitesti → Bucharest. Total Cost: 278
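For illustration, a Python sketch of uniform-cost search with a priority queue (heapq) that reproduces this trace, using only the edge costs listed above:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]                 # (path cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest node first
        if node == goal:
            return cost, path
        for successor, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best_cost.get(successor, float("inf")):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

graph = {
    "Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
}
print(uniform_cost_search(graph, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])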
Search Strategies
Uninformed Search Strategies
Uniform-cost Search
Advantages:
➢ Uniform cost search is optimal because at every state the path with the least cost
is chosen.
Disadvantages:
➢ It does not care about the number of steps involved in searching and is only
concerned with path cost, due to which this algorithm may get stuck in an
infinite loop.



Search Strategies
Uninformed Search Strategies
Uniform-cost Search
Completeness: Uniform-cost search is complete: if there is a solution, UCS
will find it.
Time Complexity: Let C* be the cost of the optimal solution and ε the smallest step
cost toward the goal node. Then the number of steps is 1 + ⌊C*/ε⌋; we add 1 because
we start from the initial state and take up to C*/ε steps.
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space Complexity: By the same logic, the worst-case
space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).



Search Strategies
Uninformed Search Strategies
Depth-first Search
➢ Depth-first search is a recursive algorithm for traversing a tree or graph
data structure.
➢ It is called the depth-first search because it starts from the root node and
follows each path to its greatest depth node before moving to the next
path.
➢ DFS uses a stack data structure for its implementation.
➢ The process of the DFS algorithm is similar to the BFS algorithm.
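As an illustration, a minimal recursive Python sketch of depth-first search (the graph below is an assumption):

def depth_first_search(graph, node, goal, path=None, visited=None):
    path = (path or []) + [node]
    visited = visited if visited is not None else set()
    visited.add(node)
    if node == goal:
        return path
    for successor in graph.get(node, []):
        if successor not in visited:              # avoid revisiting states
            found = depth_first_search(graph, successor, goal, path, visited)
            if found:
                return found
    return None                                   # dead end: backtrack

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": [], "E": []}
print(depth_first_search(graph, "S", "E"))        # ['S', 'B', 'E']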



Search Strategies
Uninformed Search Strategies
➢ Depth-first Search



Search Strategies
Uninformed Search Strategies
Depth-first Search
➢ Completeness: DFS is complete within a finite state space, as it will expand
every node within a bounded search tree.
➢ Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm: T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m),
where m = the maximum depth of any node, which can be
much larger than d (the shallowest solution depth).
➢ Space Complexity: DFS needs to store only a single path from the
root node, hence the space complexity of DFS is equivalent to the size of the
fringe set, which is O(bm).
➢ Optimal: DFS is non-optimal, as it may take a large
number of steps or a high cost to reach the goal node.



Search Strategies
Uninformed Search Strategies
Depth-limited Search
➢ A depth-limited search algorithm is similar to depth-first search with a
predetermined limit.
➢ Depth-limited search can solve the drawback of the infinite path in the
Depth-first search.
➢ In this algorithm, the node at the depth limit is treated as if it has no further
successor nodes.
Depth-limited search can be terminated with two Conditions of failure:
➢ Standard failure value: It indicates that problem does not have any solution.
➢ Cutoff failure value: It defines no solution for the problem within a given
depth limit.
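For illustration, a Python sketch of depth-limited search that distinguishes the two failure values described above (the graph is an assumption):

def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                           # cutoff failure value
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None  # None = standard failure value

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": []}
print(depth_limited_search(graph, "S", "D", limit=1))   # 'cutoff'
print(depth_limited_search(graph, "S", "D", limit=2))   # ['S', 'B', 'D']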



Search Strategies
Uninformed Search Strategies
Depth-limited Search



Search Strategies
Uninformed Search Strategies
Depth-limited Search
➢ Completeness: The DLS algorithm is complete if the solution lies within
the depth limit.
➢ Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
➢ Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
Advantages:
➢ Depth-limited search is Memory efficient.
Disadvantages:
➢ Depth-limited search also has a disadvantage of incompleteness.
➢ It may not be optimal if the problem has more than one solution.



Search Strategies
Uninformed Search Strategies
Iterative deepening Depth-First Search
➢ The iterative deepening algorithm is a combination of DFS and BFS
algorithms.
➢ This search algorithm finds out the best depth limit and does it by gradually
increasing the limit until a goal is found.
➢ This algorithm performs depth-first search up to a certain "depth limit", and
it keeps increasing the depth limit after each iteration until the goal node is
found.
➢ This search algorithm combines the benefits of breadth-first search's
completeness with depth-first search's memory efficiency.
➢ The iterative deepening algorithm is a useful uninformed search when the search
space is large and the depth of the goal node is unknown.
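As an illustration, a Python sketch of iterative deepening: a depth-limited search is repeated with limit 0, 1, 2, ... until the result is no longer a cutoff (the graph and the cutoff convention are assumptions, as in the DLS sketch above):

def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for succ in graph.get(node, []):
        r = dls(graph, succ, goal, limit - 1)
        if r == "cutoff":
            cutoff = True
        elif r is not None:
            return [node] + r
    return "cutoff" if cutoff else None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):       # gradually increase the depth limit
        result = dls(graph, start, goal, limit)
        if result != "cutoff":
            return result                    # a path, or None (standard failure)
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": ["G"], "D": [], "G": []}
print(iterative_deepening_search(graph, "S", "G"))   # ['S', 'A', 'C', 'G']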
Search Strategies
Uninformed Search Strategies
Iterative deepening Depth-First Search



Search Strategies
Uninformed Search Strategies
Iterative deepening Depth-First Search
Advantages:
➢ It combines the benefits of BFS and DFS search algorithm in terms of fast
search and memory efficiency.
Disadvantages:
➢ The main drawback of IDDFS is that it repeats all the work of the previous
phase.
Completeness: This algorithm is complete if the branching factor is finite.
Time Complexity: Suppose b is the branching factor and the depth of the shallowest
goal is d; then the worst-case time complexity is O(b^d).
Space Complexity: The space complexity of IDDFS is O(bd).



Search Strategies
Uninformed Search Strategies
Bidirectional Search
➢ The bidirectional search algorithm runs two simultaneous searches, one from the
initial state (called the forward search) and the other from the goal node (called
the backward search), to find the goal node.
➢ Bidirectional search replaces one single search graph with two small
subgraphs: one starts the search from the initial vertex and the other
starts from the goal vertex.
➢ The search stops when these two graphs intersect each other
➢ Bidirectional search can use search techniques such as BFS, DFS



Search Strategies
Uninformed Search Strategies
Bidirectional Search



Search Strategies
Uninformed Search Strategies
Bidirectional Search
1. Initialization
The algorithm initializes two frontiers:
Forward Frontier: Starts with the root node (1).
Backward Frontier: Starts with the goal node (16).

2. Forward Search Expansion


The forward search begins at node 1 and explores its neighbors:
From 1, it expands to nodes 4, 2.
The next step expands node 4, which connects to node 8.

3. Backward Search Expansion


The backward search begins at node 16 and explores its neighbors:
From 16, it expands to nodes 12 and 15.
The next step expands node 12, which connects to node 10.
Search Strategies
Uninformed Search Strategies
Bidirectional Search
4. Meeting at Intersection Node
The forward search reaches node 8, and the backward search reaches node 10.
Both searches eventually expand to node 9, which becomes the intersection node.

5. Path Construction
Once the two searches meet at node 9, the algorithm reconstructs the path:
Forward Path: From 1 → 4 → 8 → 9.
Backward Path: From 9 → 10 → 12 → 16.
Combining these paths gives the full solution:
1 → 4 → 8 → 9 → 10 → 12 → 16
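For illustration, a Python sketch of bidirectional breadth-first search; the figure's graph is not reproduced here, so the adjacency list below is an assumed undirected graph chosen to match the trace above (the frontiers meet at node 9):

from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_queue, bwd_queue = deque([start]), deque([goal])

    def expand(queue, parents, other_parents):
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
                if nbr in other_parents:          # the two frontiers intersect
                    return nbr
        return None

    while fwd_queue and bwd_queue:
        meet = expand(fwd_queue, fwd_parent, bwd_parent) or \
               expand(bwd_queue, bwd_parent, fwd_parent)
        if meet:
            path = []                             # forward half of the path
            n = meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]                  # backward half of the path
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

graph = {1: [2, 4], 2: [1], 4: [1, 8], 8: [4, 9], 9: [8, 10],
         10: [9, 12], 12: [10, 16], 15: [16], 16: [12, 15]}
print(bidirectional_search(graph, 1, 16))   # [1, 4, 8, 9, 10, 12, 16]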



Search Strategies
Uninformed Search Strategies
Bidirectional Search
Advantages:
➢ Bidirectional search is fast.
➢ Bidirectional search requires less memory
Disadvantages:
➢ Implementation of the bidirectional search tree is difficult.
➢ In bidirectional search, one should know the goal state in advance.
Completeness: Bidirectional Search is complete if we use BFS in both searches.
Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).



Search Strategies
Informed Search Strategies
➢ An informed search algorithm uses knowledge such as how far we are from
the goal, path cost, how to reach the goal node, etc. This knowledge helps agents
explore less of the search space and find the goal node more efficiently.
➢ Informed search algorithms are more useful for large search spaces. Informed search
uses the idea of a heuristic, so it is also called heuristic search.
➢ The general approach we consider is called best-first search
➢ Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation function,
f(n).
➢ The evaluation function is construed as a cost estimate, so the node with the lowest
evaluation is expanded first
➢ Most best-first algorithms include as a component of f a heuristic function, denoted
h(n): h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
Search Strategies

Informed Search Strategies

➢ Best First Search Algorithm (Greedy search)


➢ A* Search Algorithm



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)
➢ Greedy best-first search algorithm always selects the path which appears best
at that moment
➢ It is the combination of depth-first search and breadth-first search algorithms
➢ It uses a heuristic function to guide the search.
➢ Best-first search allows us to take the advantages of both algorithms
➢ With the help of best-first search, at each step, we can choose the most
promising node
➢ In the best first search algorithm, we expand the node which is closest to the
goal node and the closest cost is estimated by heuristic function, i.e.
f(n)= h(n).



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)
➢ Greedy best-first search tries to expand the node that is closest to the
goal, on the grounds that this is likely to lead to a solution quickly.
➢ Thus, it evaluates nodes by using just the heuristic function; that is, f(n) =
h(n).
➢ Example: Let us see how this works for route-finding problems in
Romania
➢ we use the straight line distance heuristic, which we will call hSLD
➢ If the goal is Bucharest, we need to know the straight-line distances to
Bucharest
➢ For example, hSLD(In(Arad))=366.
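As an illustration, a Python sketch of greedy best-first search with f(n) = h(n); the hSLD values come from the standard textbook table, and the small map fragment is an assumption:

import heapq

h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}
roads = {"Arad": ["Sibiu"], "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Sibiu", "Bucharest"], "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
         "Pitesti": ["Rimnicu Vilcea", "Bucharest"], "Bucharest": []}

def greedy_best_first(start, goal):
    frontier = [(h_sld[start], start, [start])]   # ordered by h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # node that looks closest to the goal
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in roads[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h_sld[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- found quickly, but not the cheapest route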



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)

Figure. Values of hSLD—straight-line distances to Bucharest.



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)

Figure. Stages in a greedy best-first tree search for Bucharest with the straight-line
distance heuristic hSLD. Nodes are labeled with their h-values.



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)
➢ Advantages:
➢ Best first search can switch between BFS and DFS by gaining the
advantages of both the algorithms.
➢ This algorithm is more efficient than BFS and DFS algorithms.
➢ Disadvantages:
➢ It can behave as an unguided depth-first search in the worst case
scenario.
➢ It can get stuck in a loop as DFS.
➢ This algorithm is not optimal.



Search Strategies
Informed Search Strategies
Best First Search Algorithm (Greedy search)
➢ Time Complexity: The worst-case time complexity of greedy best-first
search is O(b^m).
➢ Space Complexity: The worst-case space complexity of greedy best-first
search is O(b^m), where m is the maximum depth of the search space.
➢ Complete: Greedy best-first search is also incomplete, even if the given
state space is finite.
➢ Optimal: Greedy best first search algorithm is not optimal.



Search Strategies
Informed Search Strategies
A* Search
➢ It evaluates nodes by combining g(n), the cost to reach the node, and h(n),
the cost to get from the node to the goal:
➢ f(n) = g(n) + h(n) .
➢ g(n) gives the path cost from the start node to node n
➢ h(n) is the estimated cost of the cheapest path from n to the goal
➢ f(n) = estimated cost of the cheapest solution through n .
➢ If we are trying to find the cheapest solution, a reasonable thing to try first
is the node with the lowest value of g(n) + h(n).
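For illustration, a Python sketch of A* on the same assumed map fragment, ordering the frontier by f(n) = g(n) + h(n); now the step costs g matter as well:

import heapq

h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}
roads = {"Arad": {"Sibiu": 140}, "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Bucharest": 211}, "Rimnicu Vilcea": {"Pitesti": 97},
         "Pitesti": {"Bucharest": 101}, "Bucharest": {}}

def a_star(start, goal):
    frontier = [(h_sld[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)    # lowest f = g + h first
        if node == goal:
            return g, path
        for nbr, step in roads[node].items():
            new_g = g + step
            if new_g < best_g.get(nbr, float("inf")):
                best_g[nbr] = new_g
                heapq.heappush(frontier,
                               (new_g + h_sld[nbr], new_g, nbr, path + [nbr]))
    return None

print(a_star("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])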



Search Strategies
Informed Search Strategies
A* Search

Figure. Values of g(n)—Real distances to Bucharest.


Search Strategies
Informed Search Strategies
A* Search

Figure. Values of hSLD—straight-line distances to Bucharest.



Search Strategies
Informed Search Strategies
A* Search



Search Strategies
Informed Search Strategies
A* Search

Figure Stages in an A∗ search for Bucharest. Nodes are labeled with f = g +h.
Search Strategies
Informed Search Strategies
A* Search
➢ Advantages
➢ The A* search algorithm performs better than many other search algorithms.
➢ A* search algorithm is optimal and complete.
➢ This algorithm can solve very complex problems.
➢ Disadvantages:
➢ It does not always produce the shortest path, as it is mostly based on
heuristics and approximation.
➢ A* search algorithm has some complexity issues.
➢ The main drawback of A* is memory requirement as it keeps all generated
nodes in the memory, so it is not practical for various large-scale
problems.
Search Strategies
Informed Search Strategies
A* Search
➢ Time Complexity: The time complexity of the A* search algorithm depends on the
heuristic function, and the number of nodes expanded is exponential in the
depth of the solution d. So the time complexity is O(b^d), where b is the
branching factor.
➢ Space Complexity: The space complexity of the A* search algorithm is O(b^d).
➢ Complete: The A* algorithm is complete as long as:
➢ the branching factor is finite, and
➢ every action has a fixed (positive) cost.
➢ Optimal: If the heuristic function is admissible (never overestimates the
actual cost), then A* tree search will always find the least-cost solution.



Heuristic Functions
➢ A heuristic function is a mathematical tool used in search algorithms to
estimate the cost of reaching the goal from a given node.
➢ It guides the search process by providing an informed guess, helping the
algorithm prioritize nodes that are more likely to lead to the goal
efficiently.
Definition:
➢ A heuristic function, denoted as ℎ(𝑛), assigns a value to each node 𝑛,
representing an estimate of the cost to reach the goal node from 𝑛.
➢ This estimate does not necessarily need to be accurate but should be
computationally efficient to evaluate.



Heuristic Functions
Properties of Heuristic Functions
➢ Admissibility:
A heuristic is admissible if it never overestimates the actual cost to reach the
goal.
h(n) ≤ h*(n) ∀n
Where ℎ*(𝑛) is the true cost to reach the goal from n.
➢ Consistency (or Monotonicity):
A heuristic is consistent if, for every node n and its neighbor n′, the estimated
cost satisfies
h(n) ≤ c(n, n′) + h(n′)
where c(n, n′) is the actual cost of moving from n to n′.



Heuristic Functions
Types of Heuristic Functions
➢ Domain-Specific Heuristics:
Tailored for specific problems, leveraging knowledge of the problem's
structure.
Example: In the 8-puzzle problem, heuristics like the Manhattan
Distance or Misplaced Tiles.



Heuristic Functions
➢ We look at heuristics for the 8-puzzle, in order to shed light on the
nature of heuristics in general.
➢ The 8-puzzle was one of the earliest heuristic search problems
➢ The objective of the puzzle is to slide the tiles horizontally or vertically
into the empty space until the configuration matches the goal
configuration
➢ The average solution cost for a randomly generated 8-puzzle instance is
about 22 steps. The branching factor is about 3.
➢ If we want to find the shortest solutions by using A∗, we need a heuristic
function that never overestimates the number of steps to the goal.



Heuristic Functions

Figure A typical instance of the 8-puzzle. The solution is 26 steps long.



Heuristic Functions
Here are two commonly used Methods:

1. Misplaced Tiles:
➢ h1 = the number of misplaced tiles. All of the eight tiles are out of position, so
the start state would have h1 = 8.
h1 is an admissible heuristic because it is clear that any tile that is out of
place must be moved at least once.



Heuristic Functions

Here are two commonly used Methods:


2. Manhattan distance:
➢ h2 = the sum of the distances of the tiles from their goal positions. Because
tiles cannot move along diagonals, the distance we will count is the sum of
the horizontal and vertical distances. This is sometimes called the city block
distance or Manhattan distance.
➢ h2 is also admissible because all any move can do is move one tile one step
closer to the goal. Tiles 1 to 8 in the start state give a Manhattan distance of
h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18.
➢ As expected, neither of these overestimates the true solution cost, which is 26.
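As an illustration, a Python sketch of both heuristics for the standard start and goal configurations (the tuple encoding and the goal layout below are assumptions):

start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)                 # 0 is the blank
goal  = (0, 1, 2,
         3, 4, 5,
         6, 7, 8)

def h1(state):
    # Number of misplaced tiles (the blank is not counted).
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != goal[i])

def h2(state):
    # Sum of Manhattan distances of each tile from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = goal.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

print(h1(start))   # 8  -- every tile is out of place
print(h2(start))   # 18 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2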



Local Search and Optimization Problems
➢ The search algorithms that we have seen so far are designed to explore search
spaces systematically
➢ This systematicity is achieved by keeping one or more paths in memory and
by recording which alternatives have been explored at each point along the
path.
➢ When a goal is found, the path to that goal also constitutes a solution to the
problem.
➢ In many problems, however, the path to the goal is irrelevant
➢ For example, in the 8-queens problem what matters is the final configuration
of queens, not the order in which they are added
➢ The same general property holds for many important applications such as
integrated-circuit design, factory-floor layout, job-shop scheduling, automatic
programming, telecommunications network optimization, vehicle routing,
and portfolio management.
Local Search and Optimization Problems
➢ Local search is a heuristic-based optimization technique used to solve
computational problems where the goal is to find an optimal or
satisfactory solution by iteratively improving a candidate solution.
➢ Instead of exploring the entire search space, local search focuses on
improving the solution by moving to neighboring states.
➢ Local search is particularly effective for optimization problems,
where the objective is to maximize or minimize a function.
➢ If the path to the goal does not matter, we might consider a different
class of algorithms, ones that do not worry about paths at all



Local Search and Optimization Problems
➢ Local search algorithms operate using a single current node (rather than
multiple paths) and generally move only to neighbors of that node.
➢ Typically, the paths followed by the search are not retained
➢ Although local search algorithms are not systematic, they have two key
advantages:
➢ They use very little memory—usually a constant amount
➢ They can often find reasonable solutions in large or infinite (continuous)
state spaces for which systematic algorithms are unsuitable.
➢ In addition to finding goals, local search algorithms are useful for solving
pure optimization problems, in which the aim is to find the best state
according to an objective function.



Local Search and Optimization Problems

➢ Hill-climbing search
➢ Simulated annealing
➢ Local beam search
➢ Genetic algorithms



Local Search and Optimization Problems
Hill-climbing search
➢ Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best solution
to the problem. It terminates when it reaches a peak value where no neighbor has a
higher value.
➢ Hill climbing is a technique which is used for optimizing mathematical
problems. One of the widely discussed examples of the hill climbing algorithm is the
Traveling Salesman Problem, in which we need to minimize the distance traveled by the
salesman.
➢ It is also called greedy local search as it only looks to its good immediate neighbor state
and not beyond that.
➢ A node of hill climbing algorithm has two components which are state and value.
➢ Hill Climbing is mostly used when a good heuristic is available.
➢ In this algorithm, we don't need to maintain and handle the search tree or graph as it
only keeps a single current state.
Local Search and Optimization Problems
Hill-climbing search
Features of Hill Climbing:
➢ Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to decide
which direction to move in the search space.
➢ Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
➢ No backtracking: It does not backtrack the search space, as it does not
remember the previous states.



Local Search and Optimization Problems
Hill-climbing search
➢ State-space Diagram for Hill Climbing:



Local Search and Optimization Problems
Hill-climbing search
➢ State-space Diagram for Hill Climbing:
➢ Local Maximum: Local maximum is a state which is better than its neighbor
states, but there is also another state which is higher than it.
➢ Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.
➢ Current state: It is a state in a landscape diagram where an agent is currently
present.
➢ Flat local maximum: It is a flat region in the landscape where all the neighbor
states of the current state have the same value.
➢ Shoulder: It is a plateau region which has an uphill edge.



Local Search and Optimization Problems
Types of Hill-climbing search
➢ Simple hill Climbing
➢ Steepest-Ascent hill-climbing
➢ Stochastic hill Climbing



Local Search and Optimization Problems
Types of Hill-climbing search
Simple hill Climbing:
➢ It evaluates one neighbor node state at a time and selects the first one
which improves the current cost, setting it as the current state.
➢ It checks only one successor state at a time, and if that state is better than the
current state, it moves to it; otherwise it stays in the same state.
This algorithm has the following features:
➢ Less time consuming
➢ Less optimal solution and the solution is not guaranteed



Local Search and Optimization Problems
Types of Hill-climbing search
Simple hill Climbing:
Algorithm for Simple Hill Climbing:
➢ Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
➢ Step 2: Loop Until a solution is found or there is no new operator left to apply.
➢ Step 3: Select and apply an operator to the current state.
➢ Step 4: Check new state:
➢ If it is goal state, then return success and quit.
➢ Else if it is better than the current state then assign new state as a current state.
➢ Else if it is not better than the current state, then return to Step 2.
➢ Step 5: Exit.
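As a rough illustration, a Python sketch of simple hill climbing on an assumed toy objective (maximize f(x) for x in 0..31), accepting the first better neighbor exactly as in the steps above:

def f(x):
    return -(x - 12) ** 2              # objective: a single peak at x = 12

def neighbours(x):
    return [n for n in (x - 1, x + 1) if 0 <= n <= 31]

def simple_hill_climbing(start):
    current = start
    while True:
        for n in neighbours(current):
            if f(n) > f(current):      # first neighbour that improves the value
                current = n
                break
        else:
            return current             # no better neighbour: a (local) maximum

print(simple_hill_climbing(3))         # 12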
Local Search and Optimization Problems
Types of Hill-climbing search
Steepest-Ascent hill Climbing:
➢ The steepest-Ascent algorithm is a variation of simple hill climbing
algorithm. This algorithm examines all the neighboring nodes of the
current state and selects one neighbor node which is closest to the goal
state.
➢ This algorithm consumes more time as it searches for multiple neighbors



Local Search and Optimization Problems
Types of Hill-climbing search
Algorithm Steepest-Ascent hill Climbing:
➢ Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make
current state as initial state.
➢ Step 2: Loop until a solution is found or the current state does not change.
➢ Let SUCC be a state such that any successor of the current state will be better than it.
➢ For each operator that applies to the current state:
➢ Apply the new operator and generate a new state.
➢ Evaluate the new state.
➢ If it is goal state, then return it and quit, else compare it to the SUCC.
➢ If it is better than SUCC, then set new state as SUCC.
➢ If the SUCC is better than the current state, then set current state to SUCC.
➢ Step 3: Exit.
Local Search and Optimization Problems
Types of Hill-climbing search
Stochastic hill Climbing:
➢ Stochastic hill climbing does not examine all of its neighbors before
moving. Rather, this search algorithm selects one neighbor node at
random and decides whether to choose it as the current state or examine
another state.



Local Search and Optimization Problems
Problems in Hill-climbing search
➢ Local Maximum: A local maximum is a peak state in the landscape
which is better than each of its neighboring states, but there is another
state also present which is higher than the local maximum.

➢ Solution: The backtracking technique can be a solution to the local
maximum problem in the state-space landscape. Create a list of promising paths
so that the algorithm can backtrack in the search space and explore other
paths as well.
Local Search and Optimization Problems
Problems in Hill-climbing search
➢ Flat Maximum (Plateau): A plateau is a flat area of the search space in which all
the neighbor states of the current state contain the same value; because of
this, the algorithm cannot find any best direction to move. A hill-climbing
search might get lost in the plateau area.

➢ Solution: The solution for the plateau is to take big steps or very small steps
while searching. Randomly select a state which is far
away from the current state, so it is possible that the algorithm could find a
non-plateau region.
Local Search and Optimization Problems
Problems in Hill-climbing search
➢ Ridges: A ridge is a special form of the local maximum. It has an area
which is higher than its surrounding areas, but itself has a slope, and cannot
be reached in a single move.

➢ Solution: With the use of bidirectional search, or by moving in different
directions, we can improve on this problem.



Local Search and Optimization Problems
Simulated annealing
➢ Simulated Annealing is an algorithm which yields both efficiency and
completeness.
➢ The algorithm picks a random move, instead of picking the best move.
➢ If the random move improves the state, then it follows the same path.
➢ Otherwise, the algorithm accepts the downhill move only with some probability
less than 1; if the move is not accepted, it picks another move.
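For illustration, a Python sketch of simulated annealing on an assumed toy objective: a random move is always accepted if it improves the value, and otherwise accepted with probability e^(ΔE/T), where the temperature T decreases over time:

import math, random

def f(x):
    return -(x - 12) ** 2                      # objective: a single peak at x = 12

def simulated_annealing(start, t0=10.0, cooling=0.95, steps=500):
    current, t = start, t0
    for _ in range(steps):
        if t < 1e-6:
            break
        nxt = max(0, min(31, current + random.choice([-1, 1])))   # random move
        delta = f(nxt) - f(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                      # occasionally accept a downhill move
        t *= cooling                           # cool the temperature
    return current

random.seed(0)
print(simulated_annealing(3))                  # usually ends at or near x = 12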



Local Search and Optimization Problems
Local beam search
➢ The local beam search algorithm keeps track of k states rather than
just one
➢ It begins with k randomly generated states at each step, all the
successors of all k states are generated
➢ If any one is a goal, the algorithm halts. Otherwise, it selects the k
best successors from the complete list and repeats.



Local Search and Optimization Problems
Stochastic beam search
➢ local beam search can suffer from a lack of diversity among the k states—
they can quickly become concentrated in a small region of the state space,
making the search little more than an expensive version of hill climbing.
➢ A variant called stochastic beam search, analogous to stochastic hill
climbing, helps alleviate this problem
➢ Instead of choosing the best k from the pool of candidate successors,
stochastic beam search chooses k successors at random, with the probability
of choosing a given successor being an increasing function of its value.
➢ Stochastic beam search bears some resemblance to the process of natural
selection, whereby the “successors” (offspring) of a “state” (organism)
populate the next generation according to its “value” (fitness).



Local Search and Optimization Problems
Genetic algorithms
➢ A genetic algorithm (or GA) is a variant of stochastic beam search in which
successor states are generated by combining two parent states rather than by
modifying a single state.

Figure The genetic algorithm, illustrated for digit strings representing 8-queens states.
The initial population in (a) is ranked by the fitness function in (b), resulting in pairs for
mating in (c). They produce offspring in (d), which are subject to mutation in (e).
Local Search and Optimization Problems
Genetic algorithms

Figure The 8-queens states corresponding to the first two parents in Figure (c) and
the first offspring in Figure(d). The shaded columns are lost in the crossover step
and the unshaded columns are retained.



Local Search and Optimization Problems
Genetic algorithms
➢ GAs begin with a set of k randomly generated states, called the population.
➢ The production of the next generation of states is shown in Figure (b)–(e). In
(b), each state is rated by the objective function, or (in GA terminology) the
fitness function
➢ A fitness function should return higher values for better states
➢ In (c), two pairs are selected at random for reproduction, in accordance with the
probabilities in (b).
➢ For each pair to be mated, a crossover point is chosen randomly from the
positions in the string
➢ In (d), the offspring themselves are created by crossing over the parent strings
at the crossover point
Local Search and Optimization Problems
Genetic algorithms
➢ For example, the first child of the first pair gets the first three digits from
the first parent and the remaining digits from the second parent, whereas
the second child gets the first three digits from the second parent and the
rest from the first parent
➢ Finally, in (e), each location is subject to random mutation with a small
independent probability
➢ One digit was mutated in the first, third, and fourth offspring
➢ In the 8-queens problem, this corresponds to choosing a queen at random
and moving it to a random square in its column
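As an illustration, a Python sketch of a genetic algorithm for 8-queens, using the digit-string representation and the number of non-attacking pairs (maximum 28) as the fitness function; the population size, mutation rate, and selection scheme below are assumptions:

import random

def fitness(state):
    # Count non-attacking pairs of queens; 28 means a solution.
    non_attacking = 0
    for i in range(8):
        for j in range(i + 1, 8):
            same_row = state[i] == state[j]
            same_diag = abs(state[i] - state[j]) == j - i
            if not same_row and not same_diag:
                non_attacking += 1
    return non_attacking

def reproduce(x, y):
    c = random.randrange(1, 8)                     # random crossover point
    return x[:c] + y[c:]

def mutate(state):
    s = list(state)
    s[random.randrange(8)] = random.randrange(8)   # move one queen within its column
    return tuple(s)

def genetic_algorithm(pop_size=50, generations=1000, mutation_rate=0.1):
    population = [tuple(random.randrange(8) for _ in range(8))
                  for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(p) for p in population]
        if 28 in fits:
            return population[fits.index(28)]
        weights = [f + 1 for f in fits]            # fitter states are chosen more often
        new_population = []
        for _ in range(pop_size):
            x, y = random.choices(population, weights=weights, k=2)   # parents
            child = reproduce(x, y)
            if random.random() < mutation_rate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

random.seed(1)
best = genetic_algorithm()
print(best, fitness(best))    # a board with fitness 28 if a solution was found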

