Informed Search algorithm

Informed search algorithms utilize heuristics to prioritize paths likely to lead to optimal solutions, reducing search space and enhancing performance. Key algorithms include Greedy Best-First Search, which expands nodes based on proximity to the goal, and A* Search, which combines path cost and heuristic estimates for optimality. Memory-bounded heuristic search methods, such as Iterative Deepening A and Recursive Best-First Search, manage memory constraints while exploring large search spaces, often at the cost of optimality or completeness.

An informed search strategy is a type of search algorithm in artificial intelligence that utilizes problem-specific knowledge, often in the form of heuristics or cost estimates, to reach a goal. This additional information helps prioritize the paths that are more likely to lead to an optimal solution, thereby reducing the search space and enhancing performance.

The primary distinction between this strategy and the uninformed search strategy is
that the node expansion process relies on an evaluation function f(n). This function is
designed to provide a cost estimate, allowing the algorithm to expand the node with
the lowest estimated cost first.

Heuristic function
A heuristic function, in the context of informed search, is a problem-specific function
that estimates the cost or distance from a given state to the goal state. It provides
additional information to guide the search algorithm toward the most promising paths,
thereby improving efficiency. The heuristic function is typically denoted as h(N),
where N represents a node in the search tree and h(G) = 0.

Defining a heuristic function is dependent on the specific problem and requires some
domain knowledge. Here is a clear approach to defining a heuristic function:

1. Understand the Problem: Begin by analyzing the problem space, identifying the goal state, and understanding the rules that govern state transitions. Determine what defines a "good" or "bad" state in relation to the goal.
2. Identify Key Features: Pinpoint measurable features or properties of the state that can help estimate the distance or cost to the goal. For example, in pathfinding, one useful feature could be the straight-line distance (Euclidean distance) to the goal.
3. Formulate the Heuristic: Develop a function h(N) that utilizes the identified features to estimate the cost from the current state N to the goal G. The heuristic should meet the following criteria:
   - Admissible: It must never overestimate the true cost of reaching the goal, which is essential for ensuring optimality:

     h(N) ≤ h*(N)

   - Consistent (Monotonic): For every state N and its successor N′, the estimated cost should satisfy h(N) ≤ cost(N, N′) + h(N′). This ensures both efficiency and optimality.
4. Validate the Heuristic: Test the heuristic on sample problems to ensure it offers useful guidance without overestimating the true cost.
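These two criteria can be checked mechanically on small problems. Below is a minimal Python sketch, assuming a hypothetical four-node graph whose edge costs, candidate heuristic h, and true optimal costs h* are all made up for illustration:

```python
# Hypothetical 4-node graph with goal 'G'; all numbers are illustrative only.
edges = {('A', 'B'): 2, ('B', 'G'): 3, ('A', 'C'): 1, ('C', 'G'): 5}
h = {'A': 4, 'B': 3, 'C': 4, 'G': 0}       # candidate heuristic h(N)
h_star = {'A': 5, 'B': 3, 'C': 5, 'G': 0}  # true optimal cost h*(N) to G

# Admissible: h(N) <= h*(N) for every node N.
admissible = all(h[n] <= h_star[n] for n in h)

# Consistent: h(N) <= cost(N, N') + h(N') for every edge (N, N').
consistent = all(h[n] <= c + h[m] for (n, m), c in edges.items())

print(admissible, consistent)  # True True
```

Note that a consistent heuristic with h(G) = 0 is always admissible, but the converse does not hold.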

Greedy Best-First Search algorithm


The Greedy Best-First Search algorithm focuses on expanding nodes that are closest
to the goal state by utilizing a heuristic function h(N). To achieve this, we use the
same data structure as the uniform-cost search (UCS) algorithm; however, instead of
ordering the queue elements based on cost, we order them according to the heuristic
values.

Pseudocode
Algorithm Greedy Best-First Search
Input: Start node S, goal node G, heuristic function h(n)
Output: Path from S to G, if found

function greedyBFS(S, G)
    Initialize:
        frontier ← {S}    // Priority queue ordered by h(n)
        visited ← ∅
    while frontier ≠ ∅ do
        // Select the best node:
        n ← frontier.pop()
        if n = G then
            return reconstructPath(n)
        end if
        // Update the visited list:
        visited.add(n)
        // Generate successors and update the frontier:
        for each successor m of n do
            if m ∉ frontier and m ∉ visited then
                frontier.add(m)
            end if
        end for
    end while
    return null    // No path found
end function
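The pseudocode above can be sketched in Python as follows; the toy graph and heuristic values are invented for illustration:

```python
import heapq

def greedy_bfs(start, goal, neighbors, h):
    """Greedy best-first search: the frontier is ordered by h(n) alone."""
    frontier = [(h(start), start)]      # priority queue keyed on h(n)
    parent = {start: None}              # records how each node was reached
    while frontier:
        _, n = heapq.heappop(frontier)
        if n == goal:                   # goal test on expansion
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        for m in neighbors(n):
            if m not in parent:         # not yet in frontier or visited
                parent[m] = n
                heapq.heappush(frontier, (h(m), m))
    return None                         # no path found

# Hypothetical toy graph and heuristic values:
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': [], 'G': []}
h_values = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
print(greedy_bfs('S', 'G', graph.get, h_values.get))  # ['S', 'A', 'G']
```

The `parent` dictionary doubles as the visited set: a node is recorded the first time it is generated, which prevents it from being added to the frontier twice.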
Example: Route finding problem

[Figure: road map of Moroccan cities (Fes, Ifrane, Zagota, Meknes, Sidi Kacem, Khenifra, Sidi Slimane, Kenitra, Boujad, Rabat, Khouribga, Berrechid, Casablanca) with road distances in km on each edge.]

Heuristic function: Euclidean (straight-line) distance

[Figure: the same map annotated with a heuristic value for each city.]
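A straight-line-distance heuristic like the one in this example can be computed directly from coordinates. The sketch below uses placeholder (x, y) positions, not the real locations of these cities, and assumes Casablanca as the goal purely for illustration:

```python
import math

# Placeholder (x, y) positions in km -- NOT the cities' real coordinates.
coords = {'Casablanca': (0.0, 0.0), 'Berrechid': (10.0, 38.0), 'Rabat': (57.0, 73.0)}

def euclidean_h(city, goal='Casablanca'):
    """Straight-line (Euclidean) distance from city to the goal city."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x1 - x2, y1 - y2)

print(euclidean_h('Berrechid'))  # distance estimate toward the goal
```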

Example: 8-puzzle problem

Heuristic function:

h(N) = Hamming distance(N) + Manhattan distance(N)


[Figure: the start state 7 2 4 / 5 _ 6 / 8 3 1 and three of its successors, each generated by sliding a tile into the blank.]
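The two components of this heuristic can be sketched in Python for a 3x3 board, with the blank encoded as 0. The goal layout below (blank in the top-left corner) is an assumption:

```python
def hamming(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """Sum of each tile's horizontal plus vertical distance from its goal cell."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

def h(state, goal):
    """The combined heuristic used in this example: Hamming + Manhattan."""
    return hamming(state, goal) + manhattan(state, goal)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # the start state from the figure, row by row
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)    # assumed goal layout
print(hamming(start, goal), manhattan(start, goal))  # 8 18
```

Each component is admissible on its own; their sum can overestimate the true cost, so it trades guaranteed optimality for stronger guidance.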

Algorithm performance analysis

The performance of the greedy best-first search algorithm can be analyzed based on the following criteria:

- Completeness: Greedy best-first search is not guaranteed to be complete. It may fail to find a solution if it enters an infinite path or gets stuck in a cycle, especially when using non-admissible heuristics.
- Optimality: Greedy best-first search is not guaranteed to be optimal. Because it ignores the cost already incurred and follows only the heuristic, it may not find the lowest-cost path.
- Time Complexity: The time complexity depends on the quality of the heuristic. In the worst case, the algorithm can explore all possible states, giving exponential time complexity O(b^d), where b is the branching factor and d is the maximum depth of the search space.
- Space Complexity: The space complexity is also exponential, O(b^d), as the algorithm stores all generated nodes in memory. This can be problematic for large search spaces.
- Heuristic Dependency: The efficiency of greedy best-first search relies heavily on the quality of the heuristic function. A well-designed heuristic can significantly reduce the number of explored states, while a poor heuristic may lead to inefficient exploration.
- Greedy Nature: The algorithm prioritizes nodes that appear closest to the goal based on the heuristic, which can lead to quick progress. However, this also risks yielding suboptimal solutions or getting trapped in local optima.
A* search algorithm
The A* search algorithm is an informed search algorithm that finds the shortest path from a start state to a goal state by combining the path cost to reach the current state, cost(N), with an estimated cost h(N) to reach the goal from state N. It also uses a priority queue to explore states in order of the lowest total estimated cost f(N) = cost(N) + h(N), which guarantees optimality if the heuristic h(N) is admissible (never overestimates the true cost) and consistent (monotonic). A* is both complete and optimal under these conditions.

Pseudocode
Algorithm A* Search Algorithm with Structured Nodes
Input: Initial state S0, goal state G
Output: Sequence of actions to the goal, or null if none exists

function AStar(S0, G)
    initialNode ← {state: S0, actions: [], parent: null, cost_path: 0, heuristic: h(S0)}
    frontier ← priority queue initialized with initialNode, ordered by fScore
    visited ← ∅    // Set of explored node states
    while frontier ≠ ∅ do
        current ← frontier.pop()    // Pop the element with the lowest fScore
        if current.test_goal() then
            return reconstructPath(current)
        end if
        visited.add(current)
        for each node in neighbors(current) do
            if node ∈ visited then
                continue
            end if
            fScore ← node.cost_path + node.heuristic    // priority used by the frontier
            if node ∉ frontier or node.cost_path < getCost(node, frontier) then
                if node ∉ frontier then
                    frontier.add(node)
                else
                    frontier.update(node)    // A cheaper path to node was found
                end if
            end if
        end for
    end while
    return null    // No path found
end function
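A runnable Python sketch of this pseudocode, using plain dictionaries instead of structured nodes; the toy graph below is invented, and its heuristic is admissible and consistent:

```python
import heapq
import itertools

def a_star(start, goal, neighbors, h):
    """A*: the frontier is a priority queue ordered by f(n) = cost(n) + h(n).
    neighbors(n) yields (successor, step_cost) pairs."""
    tie = itertools.count()                 # tie-breaker so the heap never compares nodes
    g = {start: 0}                          # best known path cost to each node
    parent = {start: None}
    frontier = [(h(start), next(tie), start)]
    visited = set()                         # fully expanded nodes
    while frontier:
        _, _, n = heapq.heappop(frontier)
        if n == goal:
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1], g[goal]
        if n in visited:
            continue                        # stale queue entry; skip it
        visited.add(n)
        for m, cost in neighbors(n):
            new_g = g[n] + cost
            if m not in g or new_g < g[m]:  # found a cheaper path to m
                g[m] = new_g
                parent[m] = n
                heapq.heappush(frontier, (new_g + h(m), next(tie), m))
    return None, float('inf')               # no path found

# Invented toy graph; h is admissible and consistent for it.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h_values = {'S': 4, 'A': 4, 'B': 1, 'G': 0}
path, cost = a_star('S', 'G', graph.get, h_values.get)
print(path, cost)  # ['S', 'B', 'G'] 5
```

Instead of updating entries in place, this sketch pushes a new entry whenever a cheaper path is found and skips stale entries on pop, which is the idiomatic way to "update" Python's `heapq`.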
Example: Maze problem

[Figure: a maze on a coordinate grid, showing the cells explored by A* from the start cell (0, 13) to the goal cell G, each labeled with its (x, y) coordinates.]

Algorithm performance analysis

- Completeness: A* is a complete search algorithm, meaning it will always find a solution if one exists, provided that the branching factor is finite and the heuristic used is admissible.
- Optimality: A* is optimal if the heuristic function h(n) is admissible (never overestimates the true cost to reach the goal) and consistent (monotonic). Under these conditions, A* guarantees the shortest path to the goal.
- Time Complexity: The time complexity of A* is exponential in the worst case, O(b^d), where b is the branching factor and d is the depth of the optimal solution. However, a well-designed heuristic can significantly reduce the number of nodes explored, improving practical performance.
- Space Complexity: The space complexity is also exponential, O(b^d), since A* stores all generated nodes in memory. This can create a bottleneck for large search spaces, making it memory-intensive.
- Heuristic Dependency: The efficiency of A* heavily depends on the quality of the heuristic. A well-designed heuristic that closely estimates the true cost (without overestimating) can greatly reduce the search space, whereas a poor heuristic may lead to exploring many unnecessary nodes.
- Balanced Search: A* balances the actual cost to reach the current state, cost(N), and the estimated cost to the goal, h(N). This ensures a systematic and efficient exploration of the search space, making it more effective than purely greedy approaches such as greedy best-first search.

In summary, A* is both complete and optimal under the conditions of an admissible and consistent heuristic. However, its exponential time and space complexity can be limiting in large or complex search spaces. The quality of the heuristic is critical to its performance.

Enhancing informed search performance

Memory-Bounded Heuristic Search
Memory-bounded heuristic search refers to a category of search algorithms that utilize heuristic guidance to explore problem spaces efficiently while adhering to strict memory constraints. These algorithms are specifically designed to manage large or complex search spaces where traditional heuristic search methods, like A*, may deplete available memory. They intelligently manage memory usage by discarding less promising nodes or limiting the depth of their exploration.

Key Characteristics:
- Heuristic Guidance: Similar to A*, these algorithms employ a heuristic function h(n) to estimate the cost to the goal, thereby directing the search toward more promising paths.
- Memory Management: These algorithms implement strategies to constrain memory usage, including:
  - Pruning: Discarding nodes that are unlikely to yield an optimal solution.
  - Iterative Deepening: Conducting repeated depth-limited searches with progressively increasing depth limits.
  - Node Regeneration: Keeping only a subset of nodes in memory and regenerating others as necessary.
- Trade-offs: Memory-bounded algorithms often compromise on optimality or completeness in favor of lower memory usage, making them practical for large-scale problems.

Examples of Memory-Bounded Heuristic Search Algorithms:

Iterative Deepening A* (IDA*):
- Combines iterative deepening with A*.
- Employs a cost threshold on f(n) = cost(n) + h(n) to restrict exploration in each iteration.
- Expands nodes incrementally, discarding those that exceed the threshold and repeating with an updated threshold.
- Although memory-efficient, it may revisit nodes multiple times.
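The threshold scheme described above can be sketched as follows, reusing the same invented toy graph as in the A* example (neighbors(n) yields (successor, step_cost) pairs):

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: depth-first search bounded by an f-cost threshold that is
    raised to the smallest overflowing f value after each iteration."""
    def dfs(path, g, threshold):
        n = path[-1]
        f = g + h(n)
        if f > threshold:
            return f                    # report how far we overshot
        if n == goal:
            return path                 # found: return the goal path
        minimum = float('inf')          # smallest f seen beyond the threshold
        for m, cost in neighbors(n):
            if m in path:               # avoid cycles along the current path
                continue
            result = dfs(path + [m], g + cost, threshold)
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
        return minimum

    threshold = h(start)
    while True:
        result = dfs([start], 0, threshold)
        if isinstance(result, list):
            return result
        if result == float('inf'):
            return None                 # no path exists
        threshold = result              # retry with the updated threshold

# Same invented toy graph as before.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h_values = {'S': 4, 'A': 4, 'B': 1, 'G': 0}
print(ida_star('S', 'G', graph.get, h_values.get))  # ['S', 'B', 'G']
```

Memory use is proportional to the current path rather than to all generated nodes, which is the point of the technique; the cost is re-expanding nodes in every iteration.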
Recursive Best-First Search (RBFS):
- A recursive approach that resembles best-first search but limits memory usage by retaining only the current path and its siblings.
- Utilizes an f-limit to prune less promising paths and backtrack when needed.
- More memory-efficient than A*, though it may also revisit nodes.

Memory-Bounded A* (MA*):
- A variant of A* that discards the least promising nodes when memory is full.
- Incorporates a strategy to recover discarded nodes if they become relevant later.

Simplified Memory-Bounded A* (SMA*):
- An extension of A* that operates within a fixed memory limit.
- Discards the least favorable nodes when memory is full but retains information about their f(N) values for potential regeneration.

Advantages:
- Scalability: Capable of managing large search spaces that surpass available memory.
- Efficiency: Reduces memory usage while still employing heuristic guidance for informed exploration.

Limitations:
- Repeated Work: Some algorithms, like IDA*, may revisit nodes, leading to increased time complexity.
- Suboptimality: In some situations, memory constraints may hinder the ability to find the optimal solution.
