AI Unit 2 (L7-L16)
RCS-702
• Initial Solution
• Ability to perform
• Incremental Formulation
– Starts with an empty state; operators augment the state description.
– Generates solutions step by step, as sequences of operator applications.
– Requires less memory, since not all states are explored.
– Example: 8-Queens Problem
• States: any arrangement of 0 to 8 queens on the board
• Initial State: no queens on the board
• Successor Function: add a queen to any empty square
• Goal Test: 8 queens are on the board and no queen is attacked (see the sketch below)
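A minimal Python sketch of this incremental formulation; the board representation, successor function, and goal test below are illustrative assumptions, not a prescribed implementation:

# Incremental formulation of the 8-queens problem (illustrative sketch).
# A state is a tuple of (row, col) positions for the queens placed so far;
# the initial state is the empty tuple.

def successors(state, n=8):
    """Successor function: add one more queen to any unoccupied square."""
    occupied = set(state)
    return [state + ((r, c),)
            for r in range(n) for c in range(n)
            if (r, c) not in occupied]

def attacks(q1, q2):
    """Two queens attack each other if they share a row, column, or diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(state, n=8):
    """Goal: all n queens are on the board and no pair attacks each other."""
    return len(state) == n and all(
        not attacks(state[i], state[j])
        for i in range(len(state)) for j in range(i + 1, len(state)))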
B. Crypt-arithmetic Agent
  FORTY
    TEN
+   TEN
-------
  SIXTY
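The task is to assign a distinct digit to each letter, with no leading zeros, so that the column sum holds. A brute-force sketch that simply tries every digit permutation (not an optimized constraint solver):

from itertools import permutations

# Brute-force sketch for FORTY + TEN + TEN = SIXTY: try every assignment of
# distinct digits to the 10 letters (the space is only 10! permutations).
LETTERS = 'FORTYENSIX'

def value(word, assign):
    """Read a word as a decimal number under the digit assignment."""
    return int(''.join(str(assign[ch]) for ch in word))

def solve():
    for digits in permutations(range(10)):
        assign = dict(zip(LETTERS, digits))
        if assign['F'] == 0 or assign['T'] == 0 or assign['S'] == 0:
            continue                      # leading letters cannot be zero
        if value('FORTY', assign) + 2 * value('TEN', assign) == value('SIXTY', assign):
            return assign
    return None

# One solution: F=2, O=9, R=7, T=8, Y=6, E=5, N=0, S=3, I=1, X=4,
# i.e. 29786 + 850 + 850 = 31486.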
The state space for the Water Jug Problem (WJP) can be described as the set of ordered pairs of
integers (x, y) such that x = 0, 1, 2, 3, or 4 (gallons in the 4-gallon jug) and y = 0, 1, 2, or 3 (gallons in the 3-gallon jug).
Initial State: (0, 0). Goal Test: (2, n) for any n.
1. {(x, y) | x < 4} → (4, y)  Fill the 4-gallon jug
2. {(x, y) | y < 3} → (x, 3)  Fill the 3-gallon jug
3. {(x, y) | x > 0} → (0, y)  Empty the 4-gallon jug
4. {(x, y) | y > 0} → (x, 0)  Empty the 3-gallon jug
5. {(x, y) | x + y ≥ 4 and y > 0} → (4, x + y − 4)  Pour from the 3-gallon jug into the 4-gallon jug until it is full
6. {(x, y) | x + y ≥ 3 and x > 0} → (x + y − 3, 3)  Pour from the 4-gallon jug into the 3-gallon jug until it is full
7. {(x, y) | x + y ≤ 4 and y > 0} → (x + y, 0)  Pour all the water from the 3-gallon jug into the 4-gallon jug
8. {(x, y) | x + y ≤ 3 and x > 0} → (0, x + y)  Pour all the water from the 4-gallon jug into the 3-gallon jug
9. (0, 2) → (2, 0)  Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
10. (2, y) → (0, y)  Empty the 2 gallons in the 4-gallon jug
11. {(x, y) | y > 0} → (x, y − d)  Pour some water d out of the 3-gallon jug (useless rule)
12. {(x, y) | x > 0} → (x − d, y)  Pour some water d out of the 4-gallon jug (useless rule)
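A short breadth-first sketch over this state space; the rule ordering and the goal test x == 2 are the only assumptions beyond the rules above (rules 5-8 are collapsed into two pour moves, and the useless rules 11-12 are omitted):

from collections import deque

# Water Jug Problem sketch: states are (x, y) with x in 0..4 (4-gallon jug)
# and y in 0..3 (3-gallon jug).

def successors(x, y):
    states = set()
    states.add((4, y))                   # rule 1: fill the 4-gallon jug
    states.add((x, 3))                   # rule 2: fill the 3-gallon jug
    states.add((0, y))                   # rule 3: empty the 4-gallon jug
    states.add((x, 0))                   # rule 4: empty the 3-gallon jug
    pour = min(y, 4 - x)
    states.add((x + pour, y - pour))     # rules 5/7: pour 3-gallon into 4-gallon
    pour = min(x, 3 - y)
    states.add((x - pour, y + pour))     # rules 6/8: pour 4-gallon into 3-gallon
    states.discard((x, y))
    return states

def solve(start=(0, 0)):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == 2:                       # goal test: 2 gallons in the 4-gallon jug
            return path
        for s in successors(x, y):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None

# solve() returns a shortest solution, e.g.
# [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)].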
B. Robot Navigator
• Step 1: Goal Setting
• Step 2: Goal Formulation
• Step 3: Problem Formulation
• Step 4: Search Formulation
• Step 5: Execution Phase
Problems can be:
• Ignorable (e.g., theorem proving)
• Recoverable (e.g., 8-puzzle)
• Irrecoverable (e.g., chess, playing cards)
Note:
1. Ignorable problems can be solved using a simple control structure that never backtracks. Such a structure is easy to implement.
• If the current node is not the goal, add the offspring of the current node, in any order, to the rear of the queue and make the new front element of the queue the current node (see the sketch after the property list below).
• Breadth-first search is optimal when all step costs are equal.
• Complete? Yes
• Time? O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε the minimum step cost
• Space? O(b^⌈C*/ε⌉)
• Optimal? Yes
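A queue-based breadth-first search sketch of the expansion rule described above; the names graph (an adjacency mapping), start, and goal are illustrative assumptions:

from collections import deque

# Queue-based breadth-first search (illustrative; `graph` is an assumed
# adjacency mapping {state: [neighbour, ...]}).
def bfs(graph, start, goal):
    queue = deque([start])               # FIFO queue holding the frontier
    parent = {start: None}               # doubles as the visited set
    while queue:
        current = queue.popleft()        # front of the queue becomes current
        if current == goal:
            path = []                    # reconstruct the path back to start
            while current is not None:
                path.append(current)
                current = parent[current]
            return list(reversed(path))
        for child in graph.get(current, []):
            if child not in parent:      # offspring go to the rear of the queue
                parent[child] = current
                queue.append(child)
    return None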
• Complete? No
• Time? O(b^l); l is the level of the tree
• Space? O(bl)
• Optimal? Maybe
• When the initial depth cut-off is one, it generates only the root node and examines it.
• If the root node is not the goal, the depth cut-off is set to two and the tree up to depth 2 is generated using a typical depth-first search (a sketch follows the property list below).
• Complete? Yes
• Time? O(b^d); d is the depth of the search tree
• Space? O(bd)
• Optimal? Yes, if the step cost is 1 or an increasing function of depth
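A minimal iterative deepening sketch, assuming that successors and goal_test are problem-specific callables supplied by the caller:

# Iterative deepening depth-first search (illustrative sketch).
def depth_limited(state, successors, goal_test, limit):
    """Depth-first search that never expands below the given cut-off."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for child in successors(state):
        result = depth_limited(child, successors, goal_test, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(start, successors, goal_test, max_depth=50):
    """Raise the depth cut-off one level at a time until a goal is found."""
    for limit in range(max_depth + 1):   # limit 0 examines only the root node
        result = depth_limited(start, successors, goal_test, limit)
        if result is not None:
            return result
    return None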
• Alternate searching from the start state toward the goal and from the goal state toward the start.
• Works well only when there are unique start and goal states.
• Complete? Yes
• Time? O(b^(d/2))
• Space? O(b^(d/2))
• Optimal? Yes
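A minimal bidirectional breadth-first sketch, assuming a single start state, a single goal state, and an undirected adjacency dict named graph; it returns the first path found when the two frontiers meet, which is not guaranteed to be the absolute shortest without extra bookkeeping:

from collections import deque

# Bidirectional breadth-first search (illustrative sketch).
def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    parents = {'fwd': {start: None}, 'bwd': {goal: None}}
    queues = {'fwd': deque([start]), 'bwd': deque([goal])}

    def build_path(meet):
        forward = []                      # start ... meet
        node = meet
        while node is not None:
            forward.append(node)
            node = parents['fwd'][node]
        forward.reverse()
        node = parents['bwd'][meet]       # node after meet, toward the goal
        while node is not None:
            forward.append(node)
            node = parents['bwd'][node]
        return forward

    while queues['fwd'] and queues['bwd']:
        for side in ('fwd', 'bwd'):       # alternate the two searches
            current = queues[side].popleft()
            for child in graph.get(current, []):
                if child not in parents[side]:
                    parents[side][child] = current
                    if child in parents['fwd'] and child in parents['bwd']:
                        return build_path(child)   # the frontiers meet here
                    queues[side].append(child)
    return None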
Uninformed (Blind) Search:
1. Nodes in the state space are searched mechanically until the goal is reached, the time limit is exceeded, or failure occurs.
2. Information about the goal state may not be given.
3. The search is blind; no heuristic guidance is used.
4. Search efficiency is low.
5. There are practical limits on the storage available for blind methods.
6. Impractical for solving very large problems.
7. The best solution can be achieved. E.g.: DFS, BFS, Branch & Bound, Iterative Deepening, etc.

Informed (Heuristic) Search:
1. More information about the initial state and operators is available.
2. Some information about the goal is always given.
3. Based on heuristic methods.
4. Searching is fast.
5. Less computation is required.
6. Can handle large search problems.
7. Mostly a good-enough solution is accepted as the optimal solution. E.g.: Best-First Search, A*, AO*, Hill Climbing, etc.
Informed Search Strategies
• Search strategies like DFS and BFS can find solutions for simple problems.
• When more information than the initial state, the operators, and the goal state is available, the size of the search space can usually be constrained. If this is the case, the better the information available, the more efficient the search process.
• Such methods are called Informed Search Methods.
• They are good to the extent that they point in generally interesting directions, and bad to the extent that they may miss points of interest to particular individuals.
• First solve the major parts of a problem, then go back and solve the smaller problems that arise while combining the big parts of the problem. Such a technique is called Means-Ends Analysis.
• Hill Climbing
– Simple Hill Climbing
– Steepest Hill Climbing
• A* Search
(refer AI_Unit2_Greedy BFS_Astar.pdf)
• AO* Search
• Iterative Deepening A*
(refer AI_Unit2_Informed_IDAstar.pdf)
1. INSERT(initial-node, FRINGE)
2. Repeat:
   a. If empty(FRINGE) then return failure
   b. N ← REMOVE(FRINGE)
   c. s ← STATE(N)
   d. If GOAL?(s) then return path or goal state
   e. For every state s' in SUCCESSORS(s)
      i. Create a node N' as a successor of N
      ii. INSERT(N', FRINGE)
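The same skeleton as a rough Python sketch; a plain list plays the role of FRINGE, and swapping the FIFO pop(0) for a stack pop or a priority-queue pop turns the skeleton into depth-first or best-first search:

# Generic fringe-based search corresponding to the pseudocode above.
def tree_search(initial_state, successors, goal_test):
    fringe = [(initial_state, [initial_state])]    # 1. INSERT(initial-node, FRINGE)
    while fringe:                                  # 2a. fail when FRINGE is empty
        state, path = fringe.pop(0)                # 2b-c. N <- REMOVE(FRINGE); s <- STATE(N)
        if goal_test(state):                       # 2d. return the path to the goal
            return path
        for child in successors(state):            # 2e. expand s
            fringe.append((child, path + [child])) # 2e-ii. INSERT(N', FRINGE)
    return None                                    # failure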
Step-2: Traverse the graph, following the current best path, and accumulate the nodes that have not yet been expanded or solved.
Step-3: Select one of these nodes and expand it. If it has no successors, assign FUTILITY as its value; otherwise compute f'(n) for each of its successors.
Step-5: Change the value of f'(n) for the newly created node to reflect its successors by propagating the change backward through the graph (back propagation).
• Complete? Yes
• Optimal? No
• Data Structure? Graph
• Hill Climbing
– (refer AI_Unit2_hill_&_simulated.pdf, also AI_Unit2_hillclimbing.pdf
(slide 136-149))
• Genetic Algorithms
• If parents have better fitness, their offspring will be better than the parents and have a better chance of surviving.
• This process keeps on iterating and, at the end, a generation with the fittest individuals will be found.
1. Fitness Function
The fitness function determines how fit an individual is (the ability of an
individual to compete with other individuals). It gives a fitness
score to each individual. The probability that an individual will be
selected for reproduction is based on its fitness score.
2. Selection
The idea of the selection phase is to select the fittest individuals and let them
pass their genes to the next generation.
Two pairs of individuals (parents) are selected based on their fitness
scores. Individuals with high fitness have more chance to be selected
for reproduction.
3. Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated, a crossover point is chosen at random from within the genes. For example, consider the crossover point to be 3, as shown in the sketch below.
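A small Python sketch of single-point crossover; the bit-string parents and the fixed crossover point 3 are made-up illustrations:

import random

# Single-point crossover (illustrative sketch).
def crossover(parent1, parent2, point=None):
    """Exchange the tails of two equal-length gene strings after `point`."""
    if point is None:
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# With crossover point 3:
# crossover("101011", "110100", point=3) -> ("101100", "110011")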
5. Mutation
6. Termination
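Since the Mutation and Termination phases are only named above, here is a minimal sketch tying the phases together for a toy problem (maximizing the number of 1-bits in a string); the population size, mutation rate, and termination condition are illustrative assumptions:

import random

# Toy genetic algorithm: evolve bit strings toward all 1s (illustrative sketch).
GENES, POP_SIZE, MUTATION_RATE, MAX_GENERATIONS = 20, 30, 0.01, 200

def fitness(individual):                 # 1. fitness function: count of 1-bits
    return sum(individual)

def select(population):                  # 2. fitness-proportionate selection of two parents
    weights = [fitness(i) + 1 for i in population]
    return random.choices(population, weights=weights, k=2)

def crossover(p1, p2):                   # 3. single-point crossover
    point = random.randint(1, GENES - 1)
    return p1[:point] + p2[point:]

def mutate(individual):                  # 5. mutation: flip each gene with small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in individual]

def run():
    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
    for generation in range(MAX_GENERATIONS):
        best = max(population, key=fitness)
        if fitness(best) == GENES:       # 6. termination: a perfect individual appears
            return best, generation
        population = [mutate(crossover(*select(population))) for _ in range(POP_SIZE)]
    return max(population, key=fitness), MAX_GENERATIONS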