Informed Search algorithm
The primary distinction between this strategy and the uninformed search strategy is
that the node expansion process relies on an evaluation function f(n). This function is
designed to provide a cost estimate, allowing the algorithm to expand the node with
the lowest estimated cost first.
Heuristic function
A heuristic function, in the context of informed search, is a problem-specific function
that estimates the cost or distance from a given state to the goal state. It provides
additional information to guide the search algorithm toward the most promising paths,
thereby improving efficiency. The heuristic function is typically denoted as h(N),
where N represents a node in the search tree and h(G) = 0.
Defining a heuristic function depends on the specific problem and requires some
domain knowledge. A practical approach to defining one is the following:
Understand the Problem: Begin by analyzing the problem space, identifying the
goal state, and understanding the rules that govern state transitions. Determine
what defines a "good" or "bad" state in relation to the goal.
Identify Key Features: Pinpoint measurable features or properties of the state that
can help estimate the distance or cost to the goal. For example, in pathfinding, one
useful feature could be the straight-line distance (Euclidean distance) to the goal.
Formulate the Heuristic: Develop a function h(N) that uses the identified
features to estimate the cost from the current state N to the goal G. At a
minimum, the heuristic should satisfy the following property:
Admissible: it must never overestimate the true cost of reaching the goal,
which is essential for guaranteeing optimality in algorithms such as A*:
h(n) ≤ h*(n),
where h*(n) denotes the true optimal cost from n to the goal.
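To make the last two steps concrete, here is a minimal Python sketch of the straight-line (Euclidean) distance heuristic mentioned above, assuming a path-finding problem in which each state is an (x, y) coordinate; the function name and state representation are illustrative assumptions, not part of the original notes.

import math

def straight_line_heuristic(state, goal):
    # Estimate the remaining cost as the Euclidean (straight-line) distance
    # from `state` to `goal`; both are assumed to be (x, y) tuples.
    # The estimate never exceeds the true path cost, so it is admissible,
    # and it evaluates to 0 at the goal, i.e. h(G) = 0.
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x2 - x1, y2 - y1)

# Example: from (0, 0) the goal (3, 4) is estimated to be 5.0 away.
print(straight_line_heuristic((0, 0), (3, 4)))  # 5.0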
Pseudocode
Algorithm Greedy Best-First Search
Input: Start node S, goal node G, heuristic function h(n)
Output: Path from S to G //if found
function greedyBFS(S, G)
    Initialize:
        frontier ← {S} //Priority queue ordered by h(n)
        visited ← ∅
    while frontier ≠ ∅ do
        n ← frontier.pop() //node with the lowest h(n)
        if n = G then
            return path from S to n
        end if
        visited ← visited ∪ {n}
        for each successor m of n do
            if m ∉ visited and m ∉ frontier then
                frontier.add(m)
            end if
        end for
    end while
    return null //no path found
end function
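For concreteness, the following is a minimal Python sketch of the same procedure. The graph representation (a dict mapping each node to its neighbours), the heuristic passed as a dict h, and the name greedy_best_first_search are assumptions chosen for illustration.

import heapq

def greedy_best_first_search(graph, h, start, goal):
    # Greedy best-first search: always expand the node with the smallest
    # heuristic value h(n). `graph[u]` is an iterable of neighbours of u,
    # and `h[u]` is the heuristic estimate from u to the goal.
    frontier = [(h[start], start, [start])]   # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                       # path from start to goal
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None                               # no path found

Note that the path cost plays no role in the ordering here: the queue is ranked purely by h(n), which is what distinguishes greedy best-first search from A*.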
Example: Route finding problem
[Figure: road map of Moroccan cities (Fes, Zagota, Ifrane, Meknes, Sidi Kacem, Khenifra, Sidi Slimane, Boujad, Kenitra, Rabat, Khouribga, Berrechid, Casablanca) with road distances in km on the edges.]
[Figure: the corresponding search tree over the same cities, with a numeric (heuristic) value shown next to each node.]
Heuristic function:
[Figure: example states annotated with their heuristic values.]
Pseudocode
Algorithm A* Search Algorithm with Structured Nodes
Input: Initial state S0, goal state G, heuristic function h(n)
Output: Path (sequence of actions) from S0 to G //if found
function AStarSearch(S0, G)
    initialNode ← {state: S0, actions: [], parent: null, cost_path: 0, heuristic: h(S0)}
    frontier ← {initialNode} //Priority queue ordered by f(n) = cost_path + heuristic
    visited ← ∅
    while frontier ≠ ∅ do
        node ← frontier.pop() //node with the lowest f(n)
        if node.state = G then
            return node.actions //path from S0 to G
        end if
        visited ← visited ∪ {node.state}
        for each action a applicable in node.state, leading to state s do
            child ← {state: s, actions: node.actions + [a], parent: node, cost_path: node.cost_path + cost(a), heuristic: h(s)}
            if s ∉ visited and s ∉ frontier then
                frontier.add(child)
            else if s ∈ frontier with a higher cost_path then
                frontier.update(child)
            end if
        end for
    end while
    return null //No path found
end function
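Below is a compact Python sketch of A* under the same assumptions as the earlier greedy sketch: the graph is a dict mapping each node to a {neighbour: step_cost} dict, h is a dict of heuristic estimates, and the name a_star_search is chosen for illustration.

import heapq
import itertools

def a_star_search(graph, h, start, goal):
    # A* search: expand the node with the lowest f(n) = g(n) + h(n).
    # `graph[u]` maps each neighbour v of u to the step cost of u -> v;
    # `h[u]` is the heuristic estimate from u to the goal.
    counter = itertools.count()               # tie-breaker for the priority queue
    frontier = [(h[start], next(counter), start, 0, [start])]
    best_g = {start: 0}                       # cheapest known path cost to each state
    while frontier:
        f, _, node, g, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                    # path and its total cost
        if g > best_g.get(node, float("inf")):
            continue                          # a cheaper path to this node was already found
        for neighbour, step_cost in graph[node].items():
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], next(counter),
                                neighbour, new_g, path + [neighbour]))
    return None, float("inf")                 # no path found

Tracking the best-known path cost per state (best_g) plays the role of frontier.update in the pseudocode: an outdated queue entry is simply skipped when it is popped.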
Example: Maze problem
[Figure: maze grid showing the solution path returned by the search, listed as a sequence of (x, y) cells.]
Memory-bounded search algorithms
Key Characteristics:
Heuristic Guidance: Similar to A∗, these algorithms employ a heuristic function
h(n) to estimate the cost to the goal, thereby directing the search toward more
promising paths.
Memory Management: These algorithms implement strategies to constrain
memory usage, including:
Pruning: Discarding nodes that are unlikely to yield an optimal solution.
Iterative Deepening: Conducting repeated depth-limited searches with
progressively increasing depth limits.
Node Regeneration: Keeping only a subset of nodes in memory and
regenerating others as necessary.
Trade-offs: Memory-bounded algorithms often compromise on optimality or
completeness in favor of lower memory usage, making them practical for large-
scale problems.
Advantages:
Scalability: Capable of managing large search spaces that surpass available
memory.
Efficiency: Reduces memory usage while still employing heuristic guidance for
informed exploration.
Limitations:
Repeated Work: Some algorithms, such as IDA* (sketched below), may re-expand
nodes across iterations, leading to increased time complexity.
Suboptimality: In some situations, memory constraints may hinder the ability to
find the optimal solution.
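To make the iterative-deepening idea concrete, here is a minimal sketch of IDA* (iterative-deepening A*, the algorithm named above). It reuses the same assumed graph and heuristic-dict representation as the earlier sketches; the function names are illustrative, not from the original notes.

def ida_star(graph, h, start, goal):
    # IDA*: repeated depth-first searches, each bounded by a threshold on
    # f(n) = g(n) + h(n). Memory stays linear in the current path length,
    # at the price of re-expanding nodes across iterations.
    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f, None                    # bound exceeded; report candidate for next bound
        if node == goal:
            return f, list(path)              # solution found
        minimum = float("inf")
        for neighbour, step_cost in graph[node].items():
            if neighbour in path:             # avoid cycles along the current path
                continue
            path.append(neighbour)
            t, found = search(path, g + step_cost, bound)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
            path.pop()
        return minimum, None

    bound = h[start]
    while True:
        bound, found = search([start], 0, bound)
        if found is not None:
            return found                      # path from start to goal
        if bound == float("inf"):
            return None                       # no path exists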